WorldWideScience

Sample records for previously published dataset

  1. Common integration sites of published datasets identified using a graph-based framework

    Directory of Open Access Journals (Sweden)

    Alessandro Vasciaveo

    2016-01-01

    Full Text Available With next-generation sequencing, the genomic data available for the characterization of integration sites (IS) has increased dramatically. At present, several thousand viral integration genome targets can be investigated in a single experiment to define genomic hot spots. In a previous article, we recast a formal CIS analysis based on a rigid fixed-window demarcation into a more flexible definition grounded on graphs. Here, we present a selection of supporting data related to the graph-based framework (GBF) from our previous article, in which a collection of common integration sites (CIS) was identified across six published datasets. In this work, we focus on two previously discussed datasets, ISRTCGD and ISHIV. Moreover, we show in more detail the workflow design that produced the datasets.
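    The graph-based idea can be sketched in miniature. Assuming integration sites on a single chromosome are given as positions, sites within a chosen distance of each other are joined by an edge, and each connected component of the resulting proximity graph is reported as a CIS. The function name and parameters below are illustrative, not the published framework's API:

```python
def common_integration_sites(positions, max_gap):
    """Group integration sites into CIS: sites reachable from one another
    through neighbors at most max_gap apart form one connected component
    of the proximity graph."""
    clusters = []
    for pos in sorted(positions):
        # On a sorted axis, the graph's connected components are runs of
        # consecutive sites separated by at most max_gap.
        if clusters and pos - clusters[-1][-1] <= max_gap:
            clusters[-1].append(pos)
        else:
            clusters.append([pos])
    return clusters
```

A one-dimensional sweep like this recovers the components without building the graph explicitly; across chromosomes, the grouping would be run per chromosome.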

  2. Genomics dataset on unclassified published organism (patent US 7547531)

    Directory of Open Access Journals (Sweden)

    Mohammad Mahfuz Ali Khan Shawan

    2016-12-01

    Full Text Available Nucleotide (DNA) sequence analysis provides important clues regarding the characteristics and taxonomic position of an organism, and is therefore crucial for establishing an organism's hierarchical classification. This dataset (patent US 7547531) was chosen to simplify the complex raw data buried in undisclosed DNA sequences, which helps to open doors for new collaborations. In this data, a total of 48 unidentified DNA sequences from patent US 7547531 were selected and their complete sequences were retrieved from the NCBI BioSample database. Quick response (QR) codes of those DNA sequences were constructed with the DNA BarID tool. QR codes are useful for the identification and comparison of isolates with other organisms. The AT/GC content of the DNA sequences was determined using the ENDMEMO GC Content Calculator, which indicates their stability at different temperatures. The highest GC content was observed in GP445188 (62.5%), followed by GP445198 (61.8%) and GP445189 (59.44%), while the lowest was in GP445178 (24.39%). In addition, the New England BioLabs (NEB) database was used to identify the cleavage code, indicating the 5′, 3′ and blunt ends, and the enzyme code, indicating the methylation sites of the DNA sequences. These data will be helpful for the construction of the organisms' hierarchical classification, determination of their phylogenetic and taxonomic position and revelation of their molecular characteristics.
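    GC content itself is a simple computation; a minimal sketch (the sequences used in testing are made up, not the patent sequences):

```python
def gc_content(seq):
    """Percentage of G and C bases in a DNA sequence (GC%).
    The complementary AT% is simply 100 - GC%."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)
```

Higher GC% correlates with greater duplex thermal stability, which is why the abstract reads it as an indicator of stability at different temperatures.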

  3. Publishing datasets with eSciDoc and panMetaDocs

    Science.gov (United States)

    Ulbricht, D.; Klump, J.; Bertelmann, R.

    2012-04-01

    Currently, several research institutions worldwide are undertaking considerable efforts to have their scientific datasets published and to syndicate them to data portals as extensively described objects identified by persistent identifiers. This is done to foster the reuse of data, to make scientific work more transparent, and to create a citable entity that can be referenced unambiguously in written publications. GFZ Potsdam established a publishing workflow for file-based research datasets. Key software components are an eSciDoc infrastructure [1] and multiple instances of the data curation tool panMetaDocs [2]. The eSciDoc repository holds data objects and their associated metadata in container objects, called eSciDoc items. A key metadata element in this context is the publication status of the referenced dataset. PanMetaDocs, which is based on PanMetaWorks [3], is a PHP-based web application that allows data to be described with any XML-based metadata schema. The metadata fields can be filled with static or dynamic content, to reduce the number of fields requiring manual entry to a minimum and to make use of contextual information in a project setting. Access rights can be applied to control the visibility of datasets to other project members, and collaboration is supported through dataset notifications (RSS) and an internal messaging system inherited from panMetaWorks. When a dataset is to be published, panMetaDocs allows the publication status of the eSciDoc item to be changed from "private" to "submitted", preparing the dataset for verification by an external reviewer. After quality checks, the item's publication status can be changed to "published", which makes the data and metadata available worldwide through the internet. PanMetaDocs is developed as an eSciDoc application. It is an easy-to-use graphical user interface to eSciDoc items, their data and metadata. It is also an application supporting a DOI publication agent during the process of
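    The status workflow described in this abstract (private, then submitted for review, then published) can be sketched as a small transition table; the names and structure below are illustrative, not the eSciDoc or panMetaDocs API:

```python
# Allowed publication-status transitions for a dataset item, mirroring
# the workflow described above (illustrative only).
TRANSITIONS = {
    "private": {"submitted"},               # curator submits for review
    "submitted": {"published", "private"},  # reviewer accepts or returns
    "published": set(),                     # published items are final
}

def change_status(item, new_status):
    """Move an item to new_status if the workflow allows it."""
    if new_status not in TRANSITIONS[item["status"]]:
        raise ValueError(f"cannot move {item['status']!r} -> {new_status!r}")
    item["status"] = new_status
    return item
```

Encoding the transitions as data makes the invariant explicit: once an item is "published" (and citable), no transition can withdraw it.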

  4. Availability of nuclear decay data in electronic form, including beta spectra not previously published

    International Nuclear Information System (INIS)

    Eckerman, K.F.; Westfall, R.J.; Ryman, J.C.; Cristy, M.

    1994-01-01

    The unabridged data used in preparing ICRP Publication 38 (1983) and a monograph of the Medical Internal Radiation Dose (MIRD) Committee are now available in electronic form. The "ICRP38 collection" contains data on the energies and intensities of radiations emitted by 825 radionuclides (those in ICRP Publication 38 plus 13 from the MIRD monograph), and the "MIRD collection" contains data on 242 radionuclides. Each collection consists of a radiations data file and a beta spectra data file. The radiations data file contains the complete listing of the emitted radiations, their types, mean or unique energies, and absolute intensities for each radionuclide; the beta spectra data file contains the probability that a beta particle will be emitted with kinetic energies defined by a standard energy grid. Although summary information from the radiation data files has been published, neither the unabridged data nor the beta spectra have been published. These data files and a data extraction utility, which runs on a personal computer, are available from the Radiation Shielding Information Center at Oak Ridge National Laboratory. 13 refs., 1 fig., 6 tabs

  5. List of new names and new combinations previously effectively, but not validly, published.

    Science.gov (United States)

    2008-09-01

    The purpose of this announcement is to effect the valid publication of the following effectively published new names and new combinations under the procedure described in the Bacteriological Code (1990 Revision). Authors and other individuals wishing to have new names and/or combinations included in future lists should send three copies of the pertinent reprint or photocopies thereof, or an electronic copy of the published paper, to the IJSEM Editorial Office for confirmation that all of the other requirements for valid publication have been met. It is also a requirement of IJSEM and the ICSP that authors of new species, new subspecies and new combinations provide evidence that types are deposited in two recognized culture collections in two different countries (i.e. documents certifying deposition and availability of type strains). It should be noted that the date of valid publication of these new names and combinations is the date of publication of this list, not the date of the original publication of the names and combinations. The authors of the new names and combinations are as given below, and these authors' names will be included in the author index of the present issue and in the volume author index. Inclusion of a name on these lists validates the publication of the name and thereby makes it available in bacteriological nomenclature. The inclusion of a name on this list is not to be construed as taxonomic acceptance of the taxon to which the name is applied. Indeed, some of these names may, in time, be shown to be synonyms, or the organisms may be transferred to another genus, thus necessitating the creation of a new combination.

  6. Reproducibility discrepancies following reanalysis of raw data for a previously published study on diisononyl phthalate (DINP) in rats

    Directory of Open Access Journals (Sweden)

    Min Chen

    2017-08-01

    Full Text Available A 2011 publication by Boberg et al. entitled "Reproductive and behavioral effects of diisononyl phthalate (DINP) in perinatally exposed rats" [1] reported statistically significant changes in sperm parameters, testicular histopathology, anogenital distance and retained nipples in developing males. Using the statistical methods as reported by Boberg et al. (2011) [1], we reanalyzed the publicly available raw data ([dataset] US EPA (United States Environmental Protection Agency), 2016 [2]). The output of our reanalysis and the discordances with the data as published in Boberg et al. (2011) [1] are highlighted herein. Further discussion of the basis for the replication discordances and the insufficiency of the Boberg et al. (2011) [1] response to address them can be found in a companion letter of correspondence (doi: 10.1016/j.reprotox.2017.03.013; Morfeld et al., 2011 [3]).

  7. Publishing descriptions of non-public clinical datasets: proposed guidance for researchers, repositories, editors and funding organisations.

    Science.gov (United States)

    Hrynaszkiewicz, Iain; Khodiyar, Varsha; Hufton, Andrew L; Sansone, Susanna-Assunta

    2016-01-01

    Sharing of experimental clinical research data usually happens between individuals or research groups rather than via public repositories, in part due to the need to protect research participant privacy. This approach to data sharing makes it difficult to connect journal articles with their underlying datasets and is often insufficient for ensuring access to data in the long term. Voluntary data sharing services such as the Yale Open Data Access (YODA) and Clinical Study Data Request (CSDR) projects have increased accessibility to clinical datasets for secondary uses while protecting patient privacy and the legitimacy of secondary analyses, but these resources are generally disconnected from journal articles, where researchers typically search for reliable information to inform future research. New scholarly journal and article types dedicated to increasing accessibility of research data have emerged in recent years and, in general, journals are developing stronger links with data repositories. There is a need for increased collaboration between journals, data repositories, researchers, funders, and voluntary data sharing services to increase the visibility and reliability of clinical research. Using the journal Scientific Data as a case study, we propose and show examples of changes to the format and peer-review process for journal articles to more robustly link them to data that are only available on request. We also propose additional features for data repositories to better accommodate non-public clinical datasets, including Data Use Agreements (DUAs).

  8. Demonstrating the value of publishing open data by linking DOI-based citations of source datasets to uses in research and policy

    Science.gov (United States)

    Copas, K.; Legind, J. K.; Hahn, A.; Braak, K.; Høftt, M.; Noesgaard, D.; Robertson, T.; Méndez Hernández, F.; Schigel, D.; Ko, C.

    2017-12-01

    GBIF—the Global Biodiversity Information Facility—has recently demonstrated a system that tracks publications back to individual datasets, giving data providers demonstrable evidence of the benefit and utility of sharing data to support an array of scholarly topics and practical applications. GBIF is an open-data network and research infrastructure funded by the world's governments. Its community consists of more than 90 formal participants and almost 1,000 data-publishing institutions, which currently make tens of thousands of datasets containing nearly 800 million species occurrence records freely and publicly available for discovery, use and reuse across a wide range of biodiversity-related research and policy investigations. Starting in 2015 with the help of DataONE, GBIF introduced DOIs as persistent identifiers for the datasets shared through its network. This enhancement soon extended to the assignment of DOIs to user downloads from GBIF.org, which typically filter the available records with a variety of taxonomic, geographic, temporal and other search terms. Despite the lack of widely accepted standards for citing data among researchers and publications, this technical infrastructure is beginning to take hold and support open, transparent, persistent and repeatable use and reuse of species occurrence data. These "download DOIs" provide canonical references for the search results researchers process and use in peer-reviewed articles—a practice GBIF encourages by confirming new DOIs with each download and offering guidelines on citation. GBIF has recently started linking these citation results back to dataset and publisher pages, offering more consistent, traceable evidence of the value of sharing data to support others' research. GBIF's experience may be a useful model for other repositories to follow.

  9. Road Bridges and Culverts, Bridge dataset only includes bridges maintained by Johnson County Public Works in the unincorporated areas, Published in Not Provided, Johnson County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Road Bridges and Culverts dataset current as of unknown. Bridge dataset only includes bridges maintained by Johnson County Public Works in the unincorporated areas.

  10. Is email a reliable means of contacting authors of previously published papers? A study of the Emergency Medicine Journal for 2001.

    Science.gov (United States)

    O'Leary, F

    2003-07-01

    To determine whether it is possible to contact authors of previously published papers via email. A cross sectional study of the Emergency Medicine Journal for 2001. 118 articles were included in the study. The response rate from those with valid email addresses was 73%. There was no statistical difference between the type of email address used and the address being invalid (p=0.392) or between the type of article and the likelihood of a reply (p=0.197). More responses were obtained from work addresses when compared with Hotmail addresses (86% v 57%, p=0.02). Email is a valid means of contacting authors of previously published articles, particularly within the emergency medicine specialty. A work based email address may be a more valid means of contact than a Hotmail address.

  11. Compilation of new and previously published geochemical and modal data for Mesoproterozoic igneous rocks of the St. Francois Mountains, southeast Missouri

    Science.gov (United States)

    du Bray, Edward A.; Day, Warren C.; Meighan, Corey J.

    2018-04-16

    The purpose of this report is to present recently acquired as well as previously published geochemical and modal petrographic data for igneous rocks in the St. Francois Mountains, southeast Missouri, as part of an ongoing effort to understand the regional geology and ore deposits of the Mesoproterozoic basement rocks of southeast Missouri, USA. The report includes geochemical data that are (1) newly acquired by the U.S. Geological Survey and (2) compiled from numerous sources published during the last fifty-five years. These data are required for ongoing petrogenetic investigations of these rocks. Voluminous Mesoproterozoic igneous rocks in the St. Francois Mountains of southeast Missouri constitute the basement buried beneath Paleozoic sedimentary rock that is over 600 meters thick in places. The Mesoproterozoic rocks of southeast Missouri represent a significant component of the approximately 1.4 billion-year-old (Ga) igneous rocks that crop out extensively in North America along the southeast margin of Laurentia; subsequent researchers have suggested that the iron oxide-copper deposits in the St. Francois Mountains are genetically associated with ca. 1.4 Ga magmatism in this region. The geochemical and modal data sets described herein were compiled to support investigations concerning the tectonic setting and petrologic processes responsible for the associated magmatism.

  12. A compendium of P- and S-wave velocities from surface-to-borehole logging; summary and reanalysis of previously published data and analysis of unpublished data

    Science.gov (United States)

    Boore, David M.

    2003-01-01

    For over 28 years, the U.S. Geological Survey (USGS) has been acquiring seismic velocity and geologic data at a number of locations in California, many of which were chosen because strong ground motions from earthquakes were recorded at the sites. The method for all measurements involves picking first arrivals of P- and S-waves from a surface source recorded at various depths in a borehole (as opposed to noninvasive methods, such as the SASW method [e.g., Brown et al., 2002]). The results from most of the sites are contained in a series of U.S. Geological Survey Open-File Reports (see References). Until now, none of the results have been available as computer files, and before 1992 the interpretation of the arrival times was in terms of piecemeal interval velocities, with no attempt to derive a layered model that would fit the travel times in an overall sense (the one exception is Porcella, 1984). In this report I reanalyze all of the arrival times in terms of layered models for P- and for S-wave velocities at each site, and I provide the results as computer files. In addition to the measurements reported in the open-file reports, I also include some borehole results from other reports, as well as some results never before published. I include data for 277 boreholes (at the time of this writing; more will be added to the web site as they are obtained), all in California (I have data from boreholes in Washington and Utah, but these will be published separately). I am also in the process of interpreting travel time data obtained using a seismic cone penetrometer at hundreds of sites; these data can be interpreted in the same way as those obtained from surface-to-borehole logging. When available, the data will be added to the web site (see below for information on obtaining data from the World Wide Web (WWW)). In addition to the basic borehole data and results, I provide information concerning strong-motion stations that I judge to be close enough to the boreholes
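    Fitting arrival times with a layered model ultimately rests on the vertical travel time through a stack of constant-velocity layers, t = Σ hᵢ/vᵢ. A minimal sketch under that simplification (vertical incidence only; the layer values in the test are hypothetical, not from the report):

```python
def vertical_travel_time(layers, depth):
    """One-way vertical travel time from the surface down to `depth`
    through a stack of (thickness, velocity) layers, t = sum(h_i / v_i).
    The last layer entered may be only partially traversed."""
    t, top = 0.0, 0.0
    for thickness, velocity in layers:
        bottom = top + thickness
        h = min(depth, bottom) - top  # portion of this layer traversed
        if h <= 0:
            break
        t += h / velocity
        top = bottom
    return t
```

Deriving a layered model from picked arrivals is then an inverse problem: choose layer thicknesses and velocities so that these predicted times match the observed first-arrival times at all receiver depths.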

  13. Men without a sense of smell exhibit a strongly reduced number of sexual relationships, women exhibit reduced partnership security - a reanalysis of previously published data.

    Science.gov (United States)

    Croy, Ilona; Bojanowski, Viola; Hummel, Thomas

    2013-02-01

    Olfactory function influences social behavior. For instance, olfaction seems to play a key role in mate choice and helps in detecting emotions in other people. In a previous study, we showed that people who were born without a sense of smell exhibit enhanced social insecurity. Based on the comments on this article, we decided to take a closer look at whether the absence of the sense of smell affects men and women differently. With this focus, questionnaire data from 32 patients diagnosed with isolated congenital anosmia (10 men, 22 women) and 36 age-matched healthy controls (15 men, 21 women) were reanalyzed. As a result, men and women without a sense of smell reported enhanced social insecurity, but with different consequences: men who were born without a sense of smell exhibited a strongly reduced number of sexual relationships, and women were affected such that they felt less secure about their partner. This emphasizes the importance of the sense of smell for intimate relationships. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Contours, This Layer was derived from the USGS National Elevation Dataset (NED) based on 7.5 minute Digital Elevation Model (DEM) image files., Published in 1999, 1:24000 (1in=2000ft) scale, Atlanta Regional Commission.

    Data.gov (United States)

    NSGIC Regional | GIS Inventory — Contours dataset current as of 1999. This Layer was derived from the USGS National Elevation Dataset (NED) based on 7.5 minute Digital Elevation Model (DEM) image...

  15. Zoning Districts, The zoning districts dataset includes the towns in Manitowoc County, WI that have adopted the county's zoning ordinance., Published in 2013, 1:2400 (1in=200ft) scale, Manitowoc County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Zoning Districts dataset current as of 2013. The zoning districts dataset includes the towns in Manitowoc County, WI that have adopted the county's zoning ordinance.

  16. Airports and Airfields, The dataset provides users with information about airport locations and attributes and can be used for national and regional analysis applications., Published in 2006, 1:24000 (1in=2000ft) scale, Louisiana State University (LSU).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Airports and Airfields dataset current as of 2006. The dataset provides users with information about airport locations and attributes and can be used for national...

  17. Using the genome aggregation database, computational pathogenicity prediction tools, and patch clamp heterologous expression studies to demote previously published long QT syndrome type 1 mutations from pathogenic to benign.

    Science.gov (United States)

    Clemens, Daniel J; Lentino, Anne R; Kapplinger, Jamie D; Ye, Dan; Zhou, Wei; Tester, David J; Ackerman, Michael J

    2018-04-01

    Mutations in the KCNQ1-encoded Kv7.1 potassium channel cause long QT syndrome (LQTS) type 1 (LQT1). It has been suggested that ∼10%-20% of rare LQTS case-derived variants in the literature may have been published erroneously as LQT1-causative mutations and may be "false positives." The purpose of this study was to determine which previously published KCNQ1 case variants are likely false positives. A list of all published, case-derived KCNQ1 missense variants (MVs) was compiled. The occurrence of each MV within the Genome Aggregation Database (gnomAD) was assessed. Eight in silico tools were used to predict each variant's pathogenicity. Case-derived variants that were either (1) too frequently found in gnomAD or (2) absent in gnomAD but predicted to be pathogenic by ≤2 tools were considered potential false positives. Three of these variants were characterized functionally using the whole-cell patch clamp technique. Overall, there were 244 KCNQ1 case-derived MVs. Of these, 29 (12%) were seen in ≥10 individuals in gnomAD and are demotable. However, 157 of 244 MVs (64%) were absent in gnomAD. Of these, 7 (4%) were predicted to be pathogenic by ≤2 tools, 3 of which we characterized functionally. There was no significant difference in current density between heterozygous KCNQ1-F127L, -P477L, or -L619M variant-containing channels compared to KCNQ1-WT. This study offers preliminary evidence for the demotion of 32 (13%) previously published LQT1 MVs. Of these, 29 were demoted because of their frequent occurrence in gnomAD. Additionally, in silico analysis and in vitro functional studies have facilitated the demotion of 3 ultra-rare MVs (F127L, P477L, L619M). Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
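    The triage rule described in this abstract lends itself to a direct sketch: flag a case-derived variant when it is common in gnomAD, or absent from gnomAD yet called pathogenic by at most two of the eight tools. Field names and data layout below are illustrative of the stated criteria, not the study's code:

```python
def demotion_candidates(variants, max_tool_calls=2, min_gnomad_count=10):
    """Flag case-derived variants as potential false positives when they
    are (1) seen in at least min_gnomad_count individuals in gnomAD, or
    (2) absent from gnomAD yet called pathogenic by at most
    max_tool_calls of the in silico tools."""
    flagged = []
    for v in variants:
        too_common = v["gnomad_count"] >= min_gnomad_count
        weakly_supported = (v["gnomad_count"] == 0
                            and v["pathogenic_calls"] <= max_tool_calls)
        if too_common or weakly_supported:
            flagged.append(v["name"])
    return flagged
```

In the study itself, candidates from rule (2) were then checked functionally by patch clamp before demotion; the screen only nominates variants for closer scrutiny.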

  18. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled “Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies” (Bennike et al., 2015 [1]), and “Proteome Analysis of Rheumatoid Arthritis Gut Mucosa” (Bennike et al., 2017 [2])...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  19. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. We...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  20. Dataset associated with the paper "Nanoscale correlation of iron biochemistry with amyloid plaque morphology in Alzheimer’s disease transgenic mouse cortex" to be published in "Cell Chemical Biology"

    OpenAIRE

    Telling, ND; Everett, J; Collingwood, JF; Dobson, J; van der Laan, G; Gallagher, JJ; Wang, J; Hitchcock, AP

    2017-01-01

    This dataset is composed of images used to construct figures in the paper, as well as text files containing the spectral data plotted in these figures. In addition, images and plots showing the cross-correlation data used to determine the correlation coefficients are included.
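    A correlation coefficient between two co-registered images is commonly a Pearson correlation over their flattened pixel values; a minimal sketch of that computation (not the authors' analysis code, and the test arrays are made up):

```python
def correlation_coefficient(img_a, img_b):
    """Pearson correlation between two equally sized images, computed
    over their flattened pixel values; a common way to quantify how
    strongly two spatially resolved signals co-vary."""
    xs = [float(v) for row in img_a for v in row]
    ys = [float(v) for row in img_b for v in row]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

Values near +1 indicate the two signals rise and fall together pixel-by-pixel; values near -1 indicate spatial anti-correlation.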

  1. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge...... of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN...
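    Of the faults listed (repetitions, mislabelings, distortions), exact repetitions are the easiest to screen for in any audio dataset. One simple sketch, assuming files are available as raw bytes, hashes each file's contents and reports groups of identical files; mislabelings and distortions would require content-based audio analysis:

```python
import hashlib

def find_exact_duplicates(files):
    """Group dataset files (given as name -> bytes) that are
    byte-for-byte identical, by hashing their contents."""
    by_digest = {}
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        by_digest.setdefault(digest, []).append(name)
    return [sorted(names) for names in by_digest.values() if len(names) > 1]
```

A check like this only catches verbatim copies; near-duplicates (re-encoded or trimmed excerpts of the same recording, which GTZAN also contains) need audio fingerprinting rather than hashing.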

  2. A Dataset of Aerial Survey Counts of Harbor Seals in Iliamna Lake, Alaska: 1984-2013

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset provides counts of harbor seals from aerial surveys over Iliamna Lake, Alaska, USA. The data have been collated from three previously published sources...

  3. Desktop Publishing.

    Science.gov (United States)

    Stanley, Milt

    1986-01-01

    Defines desktop publishing, describes microcomputer developments and software tools that make it possible, and discusses its use as an instructional tool to improve writing skills. Reasons why students' work should be published, examples of what to publish, and types of software and hardware to facilitate publishing are reviewed. (MBR)

  4. Music publishing

    OpenAIRE

    Simões, Alberto; Almeida, J. J.

    2003-01-01

    Current music publishing on the Internet is mainly concerned with sound publishing. We claim that music publishing is not only making sound available but also defining relations between a set of music objects like music scores, guitar chords, lyrics and their meta-data. We want an easy way to publish music on the Internet, to make high-quality paper booklets and even to create Audio CDs. In this document we present a workbench for music publishing based on open formats, using open-source t...

  5. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  6. Published journal article with data

    Data.gov (United States)

    U.S. Environmental Protection Agency — published journal article. This dataset is associated with the following publication: Schumacher, B., J. Zimmerman, J. Elliot, and G. Swanson. The Effect of...

  7. Publisher Correction

    DEFF Research Database (Denmark)

    Bonàs-Guarch, Sílvia; Guindo-Martínez, Marta; Miguel-Escalada, Irene

    2018-01-01

    In the originally published version of this Article, the affiliation details for Santi González, Jian'an Luan and Claudia Langenberg were inadvertently omitted. Santi González should have been affiliated with 'Barcelona Supercomputing Center (BSC), Joint BSC-CRG-IRB Research Program in Computatio...

  8. Publisher Correction

    DEFF Research Database (Denmark)

    Stokholm, Jakob; Blaser, Martin J.; Thorsen, Jonathan

    2018-01-01

    The originally published version of this Article contained an incorrect version of Figure 3 that was introduced following peer review and inadvertently not corrected during the production process. Both versions contain the same set of abundance data, but the incorrect version has the children...

  9. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the version of this article originally published, one of the two authors with the name Wei Zhao was omitted from the author list and the affiliations for both authors were assigned to the single Wei Zhao in the author list. In addition, the ORCID for Wei Zhao (Department of Biostatistics and E...

  10. Dear Publisher.

    Science.gov (United States)

    Chelton, Mary K.

    1992-01-01

    Addresses issues that concern the relationship between publishers and librarians, including differences between libraries and bookstores; necessary information for advertisements; out-of-stock designations and their effect on budgets; the role of distributors and vendors; direct mail for book promotions; unsolicited review copies; communications…

  11. Electronic Publishing.

    Science.gov (United States)

    Lancaster, F. W.

    1989-01-01

    Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…

  12. Measurement errors in polymerase chain reaction are a confounding factor for a correct interpretation of 5-HTTLPR polymorphism effects on lifelong premature ejaculation: a critical analysis of a previously published meta-analysis of six studies.

    Science.gov (United States)

    Janssen, Paddy K C; Olivier, Berend; Zwinderman, Aeilko H; Waldinger, Marcel D

    2014-01-01

    To analyze a recently published meta-analysis of six studies on 5-HTTLPR polymorphism and lifelong premature ejaculation (PE). Calculation of fraction observed and expected genotype frequencies and Hardy-Weinberg equilibrium (HWE) of cases and controls. LL, SL and SS genotype frequencies of patients were subtracted from genotype frequencies of an ideal population (LL 25%, SL 50%, SS 25%, p = 1 for HWE). Analysis of PCRs of six studies and re-analysis of the analysis and odds ratios (ORs) reported in the recently published meta-analysis. Three studies deviated from HWE in patients and one study deviated from HWE in controls. In the three studies in-HWE, the mean deviation of genotype frequencies from a theoretical population not deviating from HWE was small: LL (1.7%), SL (-2.3%), SS (0.6%). In the three studies not-in-HWE, the mean deviation of genotype frequencies was high: LL (-3.3%), SL (-18.5%) and SS (21.8%), with a very low percentage of the SL genotype concurrent with a very high percentage of the SS genotype. The most serious PCR deviations were reported in the three not-in-HWE studies. The three in-HWE studies had normal ORs. In contrast, the three not-in-HWE studies had a low OR. In the three studies not-in-HWE and with very low OR, inadequate PCR analysis and/or inadequate interpretation of its gel electrophoresis resulted in very low SL and a resulting shift to very high SS genotype frequency outcome. Consequently, the PCRs of these three studies are not reliable. Failure to note the inadequacy of PCR tests makes such PCRs a confounding factor in the clinical interpretation of genetic studies. Currently, a meta-analysis can only be performed on the three studies in-HWE. However, based on the three studies in-HWE with OR of about 1, there is no indication that in men with lifelong PE the frequency of the LL, SL and SS genotypes deviates from the general male population and/or that the SL or SS genotype is in any way associated with lifelong PE.
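    The "ideal population" used as the reference in this abstract (25% LL, 50% SL, 25% SS) is simply the Hardy-Weinberg expectation when both alleles have frequency 0.5. A minimal sketch of computing HWE-expected genotype counts from observed counts:

```python
def hwe_expected(n_LL, n_SL, n_SS):
    """Expected LL/SL/SS genotype counts under Hardy-Weinberg
    equilibrium, derived from the observed allele frequencies:
    p^2, 2pq and q^2 times the sample size."""
    n = n_LL + n_SL + n_SS
    p = (2 * n_LL + n_SL) / (2 * n)  # frequency of the L allele
    q = 1 - p
    return p * p * n, 2 * p * q * n, q * q * n
```

Comparing these expected counts with the observed counts (e.g. by a chi-square statistic with one degree of freedom) is the usual test for the HWE deviations discussed above.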

  13. Measurement errors in polymerase chain reaction are a confounding factor for a correct interpretation of 5-HTTLPR polymorphism effects on lifelong premature ejaculation: a critical analysis of a previously published meta-analysis of six studies.

    Directory of Open Access Journals (Sweden)

    Paddy K C Janssen

    Full Text Available OBJECTIVE: To analyze a recently published meta-analysis of six studies on 5-HTTLPR polymorphism and lifelong premature ejaculation (PE). METHODS: Calculation of fraction observed and expected genotype frequencies and Hardy-Weinberg equilibrium (HWE) of cases and controls. LL, SL and SS genotype frequencies of patients were subtracted from genotype frequencies of an ideal population (LL 25%, SL 50%, SS 25%, p = 1 for HWE). Analysis of PCRs of six studies and re-analysis of the analysis and odds ratios (ORs) reported in the recently published meta-analysis. RESULTS: Three studies deviated from HWE in patients and one study deviated from HWE in controls. In the three studies in-HWE the mean deviation of genotype frequencies from a theoretical population not deviating from HWE was small: LL (1.7%), SL (-2.3%), SS (0.6%). In the three studies not-in-HWE the mean deviation of genotype frequencies was high: LL (-3.3%), SL (-18.5%) and SS (21.8%), with a very low percentage of the SL genotype concurrent with a very high percentage of the SS genotype. The most serious PCR deviations were reported in the three not-in-HWE studies. The three in-HWE studies had normal ORs. In contrast, the three not-in-HWE studies had low ORs. CONCLUSIONS: In the three studies not-in-HWE and with very low ORs, inadequate PCR analysis and/or inadequate interpretation of its gel electrophoresis resulted in a very low SL and a resulting shift to a very high SS genotype frequency outcome. Consequently, the PCRs of these three studies are not reliable. Failure to note the inadequacy of PCR tests makes such PCRs a confounding factor in the clinical interpretation of genetic studies. Currently, a meta-analysis can only be performed on the three studies in-HWE. However, based on the three studies in-HWE with ORs of about 1, there is no indication that in men with lifelong PE the frequency of the LL, SL and SS genotypes deviates from the general male population and/or that the SL or SS genotype is in any way associated with lifelong PE.

  14. Data Sharing & Publishing at Nature Publishing Group

    Science.gov (United States)

    VanDecar, J. C.; Hrynaszkiewicz, I.; Hufton, A. L.

    2015-12-01

    In recent years, the research community has come to recognize that upon-request data sharing has important limitations [1,2]. The Nature-titled journals feel that researchers have a duty to share data without undue qualifications, in a manner that allows others to replicate and build upon their published findings. Historically, the Nature journals have been strong supporters of data deposition in communities with existing data mandates, and have required data sharing upon request in all other cases. To help address some of the limitations of upon-request data sharing, the Nature titles have strengthened their existing data policies and forged a new partnership with Scientific Data, to promote wider data sharing in discoverable, citeable and reusable forms, and to ensure that scientists get appropriate credit for sharing [3]. Scientific Data is a new peer-reviewed journal for descriptions of research datasets, which works with a wide range of public data repositories [4]. Articles at Scientific Data may either expand on research publications at other journals or may be used to publish new datasets. The Nature Publishing Group has also signed the Joint Declaration of Data Citation Principles [5], and Scientific Data is our first journal to include formal data citations. We are currently in the process of adding data citation support to our various journals.
    [1] Wicherts, J. M., Borsboom, D., Kats, J. & Molenaar, D. The poor availability of psychological research data for reanalysis. Am. Psychol. 61, 726-728, doi:10.1037/0003-066x.61.7.726 (2006).
    [2] Vines, T. H. et al. Mandated data archiving greatly improves access to research data. FASEB J. 27, 1304-1308, doi:10.1096/fj.12-218164 (2013).
    [3] Data-access practices strengthened. Nature 515, 312, doi:10.1038/515312a (2014).
    [4] More bang for your byte. Sci. Data 1, 140010, doi:10.1038/sdata.2014.10 (2014).
    [5] Data Citation Synthesis Group: Joint Declaration of Data Citation Principles. (FORCE11, San Diego, CA, 2014).

  15. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  16. Transition to electronic publishing

    Science.gov (United States)

    Bowning, Sam

    Previous communications have described some of the many changes that will occur in the next few months as AGU makes the transition to fully electronic publishing. With the advent of the new AGU electronic publishing system, manuscripts will be submitted, edited, reviewed, and published in electronic formats. This piece discusses how the electronic journals will differ from the print journals. Electronic publishing will require some adjustments to the ways we currently think about journals from our perspective of standard print versions. Visiting the Web site of AGU's Geochemistry, Geophysics, Geosystems (G-Cubed) is a great way to get familiar with the look and feel of electronic publishing. However, protocols, especially for citations of articles, are still evolving. Some of the biggest changes for users of AGU publications may be the lack of page numbers, the use of a unique identifier (DOI), and changes in citation style.

  17. Libraries, The locations and contact information for academic, private and public libraries in Rhode Island. The intention of this dataset was to provide an overview of data. Additional information pertinent to the state is also available from the RI Department of, Published in 2007, 1:4800 (1in=400ft) scale, Rhode Island and Providence Plantations.

    Data.gov (United States)

    NSGIC State | GIS Inventory — Libraries dataset current as of 2007. The locations and contact information for academic, private and public libraries in Rhode Island. The intention of this dataset...

  18. Public Access Points, Location of public beach access along the Oregon Coast. Boat ramp locations were added to the dataset to allow users to view the location of boat ramps along the Columbia River and the Willamete River north of the Oregon City Dam., Published in 2005, 1:100000 (1in=8333ft) scale, Oregon Geospatial Enterprise Office (GEO).

    Data.gov (United States)

    NSGIC State | GIS Inventory — Public Access Points dataset current as of 2005. Location of public beach access along the Oregon Coast. Boat ramp locations were added to the dataset to allow users...

  19. Child Day Care Centers, This dataset contains the licensed daycare center locations in MD. Addresses were provided by the Department of Labor Licensing and Regulation (DLLR), and geocoded using Maryland Statewide Addressing Initiative Centerline., Published in 2012, 1:2400 (1in=200ft) scale, Towson University.

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Child Day Care Centers dataset current as of 2012. This dataset contains the licensed daycare center locations in MD. Addresses were provided by the Department of...

  20. Aerial Photography and Imagery, Ortho-Corrected, This dataset contains imagery of Prince George's County in RGB format. The primary goal was to acquire Countywide Digital Orthoimagery at 6" ground pixel resolution., Published in 2009, 1:1200 (1in=100ft) scale, Maryland National Capital Park and Planning Commission.

    Data.gov (United States)

    NSGIC Non-Profit | GIS Inventory — Aerial Photography and Imagery, Ortho-Corrected dataset current as of 2009. This dataset contains imagery of Prince George's County in RGB format. The primary goal...

  1. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  2. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty , and S. Raman....

  3. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  4. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  5. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  6. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  7. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  8. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacies (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  9. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist under semi-anechoic studio conditions by Akshay Anantapadmanabhan, using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16-bit wav files. The dataset can be used for training models for each Mridangam stroke. A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper. Akshay Anantapadman...

  10. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an

  11. How meaningful are risk determinations in the absence of a complete dataset? Making the case for publishing standardized test guideline and ‘no effect’ studies for evaluating the safety of nanoparticulates versus spurious ‘high effect’ results from single investigative studies

    Science.gov (United States)

    Warheit, David B.; Donner, E. Maria

    2015-06-01

    A recent review article critically assessed the effectiveness of published research articles in nanotoxicology to meaningfully address health and safety issues for workers and consumers. The main conclusions were that, based on a number of flaws in study designs, the potential risk from exposures to nanomaterials is highly exaggerated, and that no ‘nano-specific’ adverse effects, different from exposures to bulk particles, have been convincingly demonstrated. In this brief editorial we focus on a related tangential issue which potentially compromises the integrity of basic risk science. We note that some single investigation studies report specious toxicity findings, which make the conclusions more alarming and attractive and publication worthy. In contrast, the standardized, carefully conducted, ‘guideline study results’ are often ignored because they can frequently report no adverse effects; and as a consequence are not considered as novel findings for publication purposes, and therefore they are never considered as newsworthy in the popular press. Yet it is the Organization for Economic Cooperation and Development (OECD) type test guideline studies that are the most reliable for conducting risk assessments. To contrast these styles and approaches, we present the results of a single study which reports high toxicological effects in rats following low-dose, short-term oral exposures to nanoscale titanium dioxide particles concomitant with selective investigative analyses. Alternatively, the findings of OECD test guideline 408, standardized guideline oral toxicity studies conducted for 90 days at much higher doses (1000 mg kg-1) in male and female rats demonstrated no adverse effects following a very thorough and complete clinical chemical, as well as histopathological evaluation of all of the relevant organs in the body. This discrepancy in study findings is not reconciled by the fact that several biokinetic studies in rats and humans demonstrate little or

  12. How meaningful are risk determinations in the absence of a complete dataset? Making the case for publishing standardized test guideline and ‘no effect’ studies for evaluating the safety of nanoparticulates versus spurious ‘high effect’ results from single investigative studies

    International Nuclear Information System (INIS)

    Warheit, David B; Donner, E Maria

    2015-01-01

    A recent review article critically assessed the effectiveness of published research articles in nanotoxicology to meaningfully address health and safety issues for workers and consumers. The main conclusions were that, based on a number of flaws in study designs, the potential risk from exposures to nanomaterials is highly exaggerated, and that no ‘nano-specific’ adverse effects, different from exposures to bulk particles, have been convincingly demonstrated. In this brief editorial we focus on a related tangential issue which potentially compromises the integrity of basic risk science. We note that some single investigation studies report specious toxicity findings, which make the conclusions more alarming and attractive and publication worthy. In contrast, the standardized, carefully conducted, ‘guideline study results’ are often ignored because they can frequently report no adverse effects; and as a consequence are not considered as novel findings for publication purposes, and therefore they are never considered as newsworthy in the popular press. Yet it is the Organization for Economic Cooperation and Development (OECD) type test guideline studies that are the most reliable for conducting risk assessments. To contrast these styles and approaches, we present the results of a single study which reports high toxicological effects in rats following low-dose, short-term oral exposures to nanoscale titanium dioxide particles concomitant with selective investigative analyses. Alternatively, the findings of OECD test guideline 408, standardized guideline oral toxicity studies conducted for 90 days at much higher doses (1000 mg kg⁻¹) in male and female rats demonstrated no adverse effects following a very thorough and complete clinical chemical, as well as histopathological evaluation of all of the relevant organs in the body. This discrepancy in study findings is not reconciled by the fact that several biokinetic studies in rats and humans demonstrate

  13. The Lunar Source Disk: Old Lunar Datasets on a New CD-ROM

    Science.gov (United States)

    Hiesinger, H.

    1998-01-01

    A compilation of previously published datasets on CD-ROM is presented. This Lunar Source Disk is intended to be a first step in the improvement/expansion of the Lunar Consortium Disk, in order to create an "image-cube"-like data pool that can be easily accessed and might be useful for a variety of future lunar investigations. All datasets were transformed to a standard map projection that allows direct comparison of different types of information on a pixel-by-pixel basis. Lunar observations have a long history and have been important to mankind for centuries, notably since the work of Plutarch and Galileo. As a consequence of centuries of lunar investigations, knowledge of the characteristics and properties of the Moon has accumulated over time. However, a side effect of this accumulation is that it has become more and more complicated for scientists to review all the datasets obtained through different techniques, to interpret them properly, to recognize their weaknesses and strengths in detail, and to combine them synoptically in geologic interpretations. Such synoptic geologic interpretations are crucial for the study of planetary bodies through remote-sensing data in order to avoid misinterpretation. In addition, many of the modern datasets, derived from Earth-based telescopes as well as from spacecraft missions, are acquired at different geometric and radiometric conditions. These differences make it challenging to compare or combine datasets directly or to extract information from different datasets on a pixel-by-pixel basis. Also, as there is no convention for the presentation of lunar datasets, different authors choose different map projections, depending on the location of the investigated areas and their personal interests. Insufficient or incomplete information on the map parameters used by different authors further complicates the reprojection of these datasets to a standard geometry. The goal of our efforts was to transfer previously published lunar

  14. Publishing and Revising Content

    Science.gov (United States)

    Editors and Webmasters can publish content without going through a workflow. Publishing times and dates can be set, and multiple pages can be published in bulk. Making an edit to published content creates a revision.

  15. Publishing with XML structure, enter, publish

    CERN Document Server

    Prost, Bernard

    2015-01-01

    XML is now at the heart of book publishing techniques: it provides the industry with a robust, flexible format which is relatively easy to manipulate. Above all, it preserves the future: the XML text becomes a genuine tactical asset enabling publishers to respond quickly to market demands. When new publishing media appear, it will be possible to very quickly make your editorial content available at a lower cost. On the downside, XML can become a bottomless pit for publishers attracted by its possibilities. There is a strong temptation to switch to audiovisual production and to add video and a

  16. An Electronic Publishing Model for Academic Publishers.

    Science.gov (United States)

    Gold, Jon D.

    1994-01-01

    Describes an electronic publishing model based on Standard Generalized Markup Language (SGML) and considers its use by an academic publisher. Highlights include how SGML is used to produce an electronic book, hypertext, methods of delivery, intellectual property rights, and future possibilities. Sample documents are included. (two references) (LRW)

  17. Getting Your Textbook Published.

    Science.gov (United States)

    Irwin, Armond J.

    1982-01-01

    Points to remember in getting a textbook published are examined: book idea, publisher's sales representatives, letter of inquiry, qualifications for authorship, author information form, idea proposal, reviews, marketing and sales, publishing agreement, author royalties, and copyright assignment. (CT)

  18. Plagiarism in scientific publishing.

    Science.gov (United States)

    Masic, Izet

    2012-12-01

    Scientific publishing is the ultimate product of a scientist's work. The number of publications and their citations are measures of a scientist's success, while unpublished research is invisible to the scientific community, and as such nonexistent. Researchers in their work rely on their predecessors, and the extent to which one scientist's work is used as a source for the work of other authors is the verification of its contribution to the growth of human knowledge. If an author has published an article in a scientific journal, they cannot publish the article in any other journal with only a few minor adjustments, or without quoting the parts of the first article that are used in the other article. Copyright infringement occurs when the author of a new article, with or without mentioning the original author, uses substantial portions of previously published articles, including tables and figures. Scientific institutions and universities should, in accordance with the principles of Good Scientific Practice (GSP) and Good Laboratory Practice (GLP), have a center for monitoring, security, promotion and development of quality research. Establishing rules of good scientific practice, and complying with them, are the obligations of each research institution, university and every individual researcher, regardless of which area of science is investigated. In this way, internal quality control ensures that a research institution such as a university assumes responsibility for creating an environment that promotes standards of excellence, intellectual honesty and legality. Although the truth should be the aim of scientific research, it is not a guiding fact for all scientists. The best way to reach the truth in a study and to avoid methodological and ethical mistakes is to consistently apply scientific methods and ethical standards in research. Although variously defined, plagiarism is basically intended to deceive the reader about one's own scientific contribution. There is no general regulation of control of

  19. PLAGIARISM IN SCIENTIFIC PUBLISHING

    Science.gov (United States)

    Masic, Izet

    2012-01-01

    Scientific publishing is the ultimate product of a scientist's work. The number of publications and their citations are measures of a scientist's success, while unpublished research is invisible to the scientific community, and as such nonexistent. Researchers in their work rely on their predecessors, and the extent to which one scientist's work is used as a source for the work of other authors is the verification of its contribution to the growth of human knowledge. If an author has published an article in a scientific journal, they cannot publish the article in any other journal with only a few minor adjustments, or without quoting the parts of the first article that are used in the other article. Copyright infringement occurs when the author of a new article, with or without mentioning the original author, uses substantial portions of previously published articles, including tables and figures. Scientific institutions and universities should, in accordance with the principles of Good Scientific Practice (GSP) and Good Laboratory Practices (GLP), have a center for monitoring, security, promotion and development of quality research. Establishing rules of good scientific practice, and complying with them, are the obligations of each research institution, university and every individual researcher, regardless of which area of science is investigated. In this way, internal quality control ensures that a research institution such as a university assumes responsibility for creating an environment that promotes standards of excellence, intellectual honesty and legality. Although the truth should be the aim of scientific research, it is not a guiding fact for all scientists. The best way to reach the truth in a study and to avoid methodological and ethical mistakes is to consistently apply scientific methods and ethical standards in research. Although variously defined, plagiarism is basically intended to deceive the reader about one's own scientific contribution. There is no general regulation of control of

  20. Embracing Electronic Publishing.

    Science.gov (United States)

    Wills, Gordon

    1996-01-01

    Electronic publishing is the grandest revolution in the capture and dissemination of academic and professional knowledge since Caxton developed the printing press. This article examines electronic publishing, describes different electronic publishing scenarios (authors' cooperative, consolidator/retailer/agent oligopsony, publisher oligopoly), and…

  1. PUBLISHER'S ANNOUNCEMENT: Refereeing standards

    Science.gov (United States)

    Bender, C.; Scriven, N.

    2004-08-01

    On 1 January 2004 I will be assuming the position of Editor-in-Chief of Journal of Physics A: Mathematical and General (J. Phys. A). I am flattered at the confidence expressed in my ability to carry out this challenging job and I will try hard to justify this confidence. The previous Editor-in-Chief, Ed Corrigan, has worked tirelessly for the last five years and has done an excellent job for the journal. Everyone at the journal is profoundly grateful for his leadership and for his achievements. Before accepting the position of Editor-in-Chief, I visited the office of J. Phys. A to examine the organization and to assess its strengths and weaknesses. This office is located at the Institute of Physics Publishing (IOPP) headquarters in Bristol. J. Phys. A has been expanding rapidly and now publishes at the rate of nearly 1000 articles (or about 14,000 pages) per year. The entire operation of the journal is conducted in a very small space---about 15 square metres! Working in this space are six highly intelligent, talented, hard working, and dedicated people: Neil Scriven, Publisher; Mike Williams, Publishing Editor; Rose Gray and Sarah Nadin, Publishing Administrators; Laura Smith and Steve Richards, Production Editors. In this small space every day about eight submitted manuscripts are downloaded from the computer or received in the post. These papers are then processed and catalogued, referees are selected, and the papers are sent out for evaluation. In this small space the referees' reports are received, publication decisions are made, and accepted articles are then published quickly by IOPP. The whole operation is amazingly efficient. Indeed, one of the great strengths of J. Phys. A is the speed at which papers are processed. The average time between the receipt of a manuscript and an editorial decision is under sixty days. (Many distinguished journals take three to five times this amount of time.) This speed of publication is an extremely strong enticement for

  2. Copyright of Electronic Publishing.

    Science.gov (United States)

    Dong, Elaine; Wang, Bob

    2002-01-01

    Analyzes the importance of copyright, considers the main causes of copyright infringement in electronic publishing, discusses fair use of a copyrighted work, and suggests methods to safeguard copyrighted electronic publishing, including legislation, contracts, and technology. (Author/LRW)

  3. Publishing: The Creative Business.

    Science.gov (United States)

    Bohne, Harald; Van Ierssel, Harry

    This book offers guidelines to emerging and would-be publishers, whether they plan to enter publishing as a career, a sideline, or a diversion. It stresses the business aspects of publishing and emphasizes the major housekeeping functions encountered in the business, except methods of sales and distribution. Contents include "The Mechanics of…

  4. Academic Nightmares: Predatory Publishing

    Science.gov (United States)

    Van Nuland, Sonya E.; Rogers, Kem A.

    2017-01-01

    Academic researchers who seek to publish their work are confronted daily with a barrage of e-mails from aggressive marketing campaigns that solicit them to publish their research with a specialized, often newly launched, journal. Known as predatory journals, they often promise high editorial and publishing standards, yet their exploitive business…

  5. Desktop Publishing Made Simple.

    Science.gov (United States)

    Wentling, Rose Mary

    1989-01-01

    The author discusses the types of computer hardware and software necessary to set up a desktop publishing system, both for use in educational administration and for instructional purposes. Classroom applications of desktop publishing are presented. The author also provides guidelines for preparing to teach desktop publishing. (CH)

  6. Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    2011-03-01

    Full Text Available Meta-analyses play an important role in synthesizing evidence from diverse studies and datasets that address similar questions. A major obstacle for meta-analyses arises from biases in reporting. In particular, it is speculated that findings which do not achieve formal statistical significance are less likely reported than statistically significant findings. Moreover, the patterns of bias can be complex and may also depend on the timing of the research results and their relationship with previously published work. In this paper, we present an approach that is specifically designed to analyze large-scale datasets on published results. Such datasets are currently emerging in diverse research fields, particularly in molecular medicine. We use our approach to investigate a dataset on Alzheimer's disease (AD) that covers 1167 results from case-control studies on 102 genetic markers. We observe that initial studies on a genetic marker tend to be substantially more biased than subsequent replications. The chances for initial, statistically non-significant results to be published are estimated to be about 44% (95% CI, 32% to 63%) relative to statistically significant results, while statistically non-significant replications have almost the same chance to be published as statistically significant replications (84%; 95% CI, 66% to 107%). Early replications tend to be biased against initial findings, an observation previously termed the Proteus phenomenon: the chances for non-significant studies going in the same direction as the initial result are estimated to be lower than the chances for non-significant studies opposing the initial result (73%; 95% CI, 55% to 96%). Such dynamic patterns in bias are difficult to capture by conventional methods, where typically simple publication bias is assumed to operate. Our approach captures and corrects for complex dynamic patterns of bias, and thereby helps generate conclusions from published results that are more robust
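    The relative publication chances quoted above can be illustrated with a toy calculation: if non-significant results are published with probability r relative to significant ones, the published odds of non-significant versus significant results equal r times the underlying odds. The counts and fractions below are invented for illustration and are not the paper's estimates.

    ```python
    # Toy estimate of the relative reporting probability
    # r = P(publish | non-significant) / P(publish | significant).
    # All counts and fractions here are invented for illustration.

    def relative_reporting(pub_nonsig, pub_sig, true_nonsig_frac):
        """Published odds divided by the assumed underlying odds."""
        published_odds = pub_nonsig / pub_sig
        true_odds = true_nonsig_frac / (1.0 - true_nonsig_frac)
        return published_odds / true_odds

    # e.g. 30 non-significant vs 70 significant published initial studies,
    # when 60% of all conducted studies would be non-significant:
    r = relative_reporting(30, 70, 0.60)
    print(round(r, 3))  # well below 1: non-significant results under-reported
    ```

    A value of r near 1, as reported for replications, would mean non-significant and significant results are published at similar rates.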

  7. The Role of Datasets on Scientific Influence within Conflict Research

    Science.gov (United States)

    Van Holt, Tracy; Johnson, Jeffery C.; Moates, Shiloh; Carley, Kathleen M.

    2016-01-01

    We inductively tested whether a coherent field of inquiry in human conflict research emerged, in an analysis of published research involving “conflict” in the Web of Science (WoS) over a 66-year period (1945–2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry; this means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed (such as interpersonal conflict or conflict among pharmaceuticals) did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957–1971, where ideas did not persist: multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path had a number of key features: 1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s–1990s and continued on until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. 2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. 3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped

  8. The Role of Datasets on Scientific Influence within Conflict Research.

    Directory of Open Access Journals (Sweden)

    Tracy Van Holt

    Full Text Available We inductively tested whether a coherent field of inquiry in human conflict research emerged, in an analysis of published research involving "conflict" in the Web of Science (WoS) over a 66-year period (1945-2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry; this means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed (such as interpersonal conflict or conflict among pharmaceuticals) did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957-1971, where ideas did not persist: multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path had a number of key features: 1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s-1990s and continued on until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. 2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. 3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped

  9. The Role of Datasets on Scientific Influence within Conflict Research.

    Science.gov (United States)

    Van Holt, Tracy; Johnson, Jeffery C; Moates, Shiloh; Carley, Kathleen M

    2016-01-01

    We inductively tested whether a coherent field of inquiry in human conflict research emerged, in an analysis of published research involving "conflict" in the Web of Science (WoS) over a 66-year period (1945-2011). We created a citation network that linked the 62,504 WoS records and their cited literature. We performed a critical path analysis (CPA), a specialized social network analysis, on this citation network (~1.5 million works) to highlight the main contributions in conflict research and to test if research on conflict has in fact evolved to represent a coherent field of inquiry. Out of this vast dataset, 49 academic works were highlighted by the CPA, suggesting a coherent field of inquiry; this means that researchers in the field acknowledge seminal contributions and share a common knowledge base. Other conflict concepts that were also analyzed (such as interpersonal conflict or conflict among pharmaceuticals) did not form their own CP. A single path formed, meaning that there was a cohesive set of ideas that built upon previous research. This is in contrast to a main path analysis of conflict from 1957-1971, where ideas did not persist: multiple paths existed and died or emerged, reflecting a lack of scientific coherence (Carley, Hummon, and Harty, 1993). The critical path had a number of key features: 1) Concepts that built throughout include the notion that resource availability drives conflict, which emerged in the 1960s-1990s and continued on until 2011. More recent intrastate studies that focused on inequalities emerged from interstate studies on the democracy of peace earlier on the path. 2) Recent research on the path focused on forecasting conflict, which depends on well-developed metrics and theories to model. 3) We used keyword analysis to independently show how the CP was topically linked (i.e., through democracy, modeling, resources, and geography). Publicly available conflict datasets developed early on helped shape the
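    The main/critical path technique described in these records can be sketched on a toy citation network. The sketch below is a hypothetical illustration, not the authors' code: it computes Search Path Count (SPC) edge weights on a small made-up citation DAG and then follows a greedy "local" main path. The node names and the tie-breaking rule are assumptions.

    ```python
    from collections import defaultdict
    from functools import lru_cache

    # Hypothetical citation DAG: an edge (a, b) means work a is cited by later work b.
    edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]

    succ = defaultdict(list)  # node -> citing works
    pred = defaultdict(list)  # node -> cited works
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)

    @lru_cache(maxsize=None)
    def n_from_source(n):
        """Number of distinct paths from any source (no predecessors) to n."""
        return 1 if not pred[n] else sum(n_from_source(p) for p in pred[n])

    @lru_cache(maxsize=None)
    def n_to_sink(n):
        """Number of distinct paths from n to any sink (no successors)."""
        return 1 if not succ[n] else sum(n_to_sink(s) for s in succ[n])

    # Search Path Count (SPC): how many source-to-sink paths traverse each edge.
    spc = {(a, b): n_from_source(a) * n_to_sink(b) for a, b in edges}

    def local_main_path(source):
        """Greedy local search: repeatedly follow the highest-SPC outgoing edge.

        Ties are broken by insertion order of the edge list (an assumption)."""
        path = [source]
        while succ[path[-1]]:
            nxt = max(succ[path[-1]], key=lambda s: spc[(path[-1], s)])
            path.append(nxt)
        return path
    ```

    On real data the "works" would be the ~1.5 million cited items, and the highlighted path would correspond to the seminal contributions the CPA surfaces.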

  10. National Elevation Dataset

    Science.gov (United States)


    2002-01-01

    The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide national elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico, and the island territories, and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is used as the consistent horizontal datum, and all the data are recast in a geographic projection. Older DEMs produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEMs do not match well, and to fill sliver areas of missing data between DEMs. These processing steps ensure that NED has no void areas and that artificial discontinuities are minimized. The artifact removal filtering does not eliminate all artifacts; in areas where the only available DEM was produced by older methods, "striping" may still occur.
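    Two of the NED assembly steps described above, converting source elevations to decimal meters and filling small voids, can be sketched as follows. The grid values, the rounding, and the 4-neighbor averaging rule are illustrative assumptions, not the USGS implementation.

    ```python
    FEET_TO_METERS = 0.3048  # exact international foot-to-meter factor

    def feet_to_meters(grid):
        """Convert a 2-D elevation grid from feet to decimal meters (None = void)."""
        return [[None if v is None else round(v * FEET_TO_METERS, 2) for v in row]
                for row in grid]

    def fill_voids(grid):
        """Fill void cells (None) with the mean of their valid 4-neighbors."""
        rows, cols = len(grid), len(grid[0])
        out = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] is None:
                    nbrs = [grid[rr][cc]
                            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                            if 0 <= rr < rows and 0 <= cc < cols
                            and grid[rr][cc] is not None]
                    if nbrs:
                        out[r][c] = round(sum(nbrs) / len(nbrs), 2)
        return out

    # Hypothetical 2x2 tile in feet, with one void cell:
    meters = feet_to_meters([[100.0, None], [300.0, 500.0]])
    seamless = fill_voids(meters)
    ```

    A production pipeline would of course also handle datum shifts and reprojection; this sketch only shows the unit-normalization and void-filling ideas.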

  11. Publishing studies: what else?

    Directory of Open Access Journals (Sweden)

    Bertrand Legendre

    2015-07-01

    Full Text Available This paper repositions “publishing studies” within the long process that runs from the beginnings of book history to current research on cultural industries. It raises questions about interdisciplinarity and about whether publishing can be considered independently of other media and cultural sectors. Publishing is now included in a large range of industries while, at the same time, analyses tend to become more and more segmented by production sector and scientific field. Beyond the professional problems this double movement creates, it also requires a questioning of the very concept of “publishing studies”.

  12. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur, and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of the Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section, or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients the Veres needle and trocar were inserted in the umbilical region, i.e., a closed laparoscopy technique. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  13. Publisher Correction to

    NARCIS (Netherlands)

    Barrio, Isabel C.; Lindén, Elin; Beest, Te Mariska; Olofsson, Johan; Rocha, Adrian; Soininen, Eeva M.; Alatalo, Juha M.; Andersson, Tommi; Asmus, Ashley; Boike, Julia; Bråthen, Kari Anne; Bryant, John P.; Buchwal, Agata; Bueno, C.G.; Christie, Katherine S.; Egelkraut, Dagmar; Ehrich, Dorothee; Fishback, Lee Ann; Forbes, Bruce C.; Gartzia, Maite; Grogan, Paul; Hallinger, Martin; Heijmans, Monique M.P.D.; Hik, David S.; Hofgaard, Annika; Holmgren, Milena; Høye, Toke T.; Huebner, Diane C.; Jónsdóttir, Ingibjörg Svala; Kaarlejärvi, Elina; Kumpula, Timo; Lange, Cynthia Y.M.J.G.; Lange, Jelena; Lévesque, Esther; Limpens, Juul; Macias-Fauria, Marc; Myers-Smith, Isla; Nieukerken, van Erik J.; Normand, Signe; Post, Eric S.; Schmidt, Niels Martin; Sitters, Judith; Skoracka, Anna; Sokolov, Alexander; Sokolova, Natalya; Speed, James D.M.; Street, Lorna E.; Sundqvist, Maja K.; Suominen, Otso; Tananaev, Nikita; Tremblay, Jean Pierre; Urbanowicz, Christine; Uvarov, Sergey A.; Watts, David; Wilmking, Martin; Wookey, Philip A.; Zimmermann, Heike H.; Zverev, Vitali; Kozlov, Mikhail V.

    2018-01-01

    The above-mentioned article was originally scheduled for publication in the special issue on Ecology of Tundra Arthropods, with guest editors Toke T. Høye and Lauren E. Culler. Erroneously, the article was published in Polar Biology, Volume 40, Issue 11, November 2017. The publisher sincerely apologizes.

  14. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0258-252X. AJOL African Journals Online.

  15. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1596-6798. AJOL African Journals Online.

  16. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1115-2613. AJOL African Journals Online.

  17. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0047-651X. AJOL African Journals Online.

  18. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0856-7212. AJOL African Journals Online.

  19. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0378-4738. AJOL African Journals Online.

  20. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0254-2765. AJOL African Journals Online.

  1. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0850-3907. AJOL African Journals Online.

  2. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2141-8322. AJOL African Journals Online.

  3. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0794-7410. AJOL African Journals Online.

  4. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2078-6778. AJOL African Journals Online.

  5. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2305-8862. AJOL African Journals Online.

  6. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1596-9819. AJOL African Journals Online.

  7. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0379-4350. AJOL African Journals Online.

  8. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2408-8137. AJOL African Journals Online.

  9. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1029-5933. AJOL African Journals Online.

  10. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2467-8252. AJOL African Journals Online.

  11. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0376-4753. AJOL African Journals Online.

  12. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1118-1028. AJOL African Journals Online.

  13. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1597-4292. AJOL African Journals Online.

  14. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0189-9686. AJOL African Journals Online.

  15. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2360-994X. AJOL African Journals Online.

  16. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1595-1413. AJOL African Journals Online.

  17. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2078-5151. AJOL African Journals Online.

  18. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1694-0423. AJOL African Journals Online.

  19. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0855-4307. AJOL African Journals Online.

  20. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1596-9827. AJOL African Journals Online.

  1. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0379-9069. AJOL African Journals Online.

  2. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1998-1279. AJOL African Journals Online.

  3. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1606-7479. AJOL African Journals Online.

  4. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1995-7262. AJOL African Journals Online.

  5. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0856-8960. AJOL African Journals Online.

  6. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0855-5591. AJOL African Journals Online.

  7. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1531-4065. AJOL African Journals Online.

  8. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1110-5607. AJOL African Journals Online.

  9. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2076-7714. AJOL African Journals Online.

  10. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1858-554X. AJOL African Journals Online.

  11. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1994-8220. AJOL African Journals Online.

  12. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1596-6232. AJOL African Journals Online.

  13. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2224-0020. AJOL African Journals Online.

  14. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 0556-8641. AJOL African Journals Online.

  15. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1596-5414. AJOL African Journals Online.

  16. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2305-2678. AJOL African Journals Online.

  17. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1119-3077. AJOL African Journals Online.

  18. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 2078-676X. AJOL African Journals Online.

  19. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1027-4332. AJOL African Journals Online.

  20. About this Publishing System

    African Journals Online (AJOL)

    This journal uses Open Journal Systems 2.4.3.0, which is open source journal management and publishing software developed, supported, and freely distributed by the Public Knowledge Project under the GNU General Public License. OJS Editorial and Publishing Process. ISSN: 1814-232X. AJOL African Journals Online.

  2. Desktop Publishing for Counselors.

    Science.gov (United States)

    Lucking, Robert; Mitchum, Nancy

    1990-01-01

    Discusses the fundamentals of desktop publishing for counselors, including hardware and software systems and peripherals. Notes that, by using desktop publishing, counselors can produce their own high-quality documents without the expense of commercial printers. Concludes that computers present a way of streamlining the communications of a counseling…

  3. Publishing: Alternatives and Economics.

    Science.gov (United States)

    Penchansky, Mimi; And Others

    The Library Association of the City University of New York presents an annotated bibliography on the subject of small and alternative publishing. In the first section, directories, indexes, catalogs, and reviews are briefly described. Book distributors for small publishers are listed next. The major portion of the bibliography is a listing of books…

  12. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  13. The Academic Publishing Industry

    DEFF Research Database (Denmark)

    Nell, Phillip Christopher; Wenzel, Tim Ole; Schmidt, Florian

    2014-01-01

    The case starts by introducing the outstanding profitability of academic journal publishers such as Elsevier and then dives into describing the research process, from an idea to conducting research and publishing the results in academic journals. Subsequently, demand and supply for scientific journals and papers are discussed, including drivers and involved parties. Furthermore, the case describes competition between suppliers, customers, and publishers. In sum, the case study features a rich description of the industry's many unusual attributes, which allows for discussing the benefits...

  14. Elearning and digital publishing

    CERN Document Server

    Ching, Hsianghoo Steve; Mc Naught, Carmel

    2006-01-01

    "ELearning and Digital Publishing" will occupy a unique niche in the literature accessed by library and publishing specialists, and by university teachers and planners. It examines the interfaces between the work done by four groups of university staff who have in the past been quite separate from, or only marginally related to, each other: library staff, university teachers, university policy makers, and staff who work in university publishing presses. All four groups are directly and intimately connected with the main functions of universities: the creation, management and dissemination...

  15. An Automatic Matcher and Linker for Transportation Datasets

    Directory of Open Access Journals (Sweden)

    Ali Masri

    2017-01-01

    Full Text Available Multimodality requires the integration of heterogeneous transportation data to construct a broad view of the transportation network. Many new transportation services are emerging while being isolated from previously-existing networks. This leads them to publish their data sources to the web, according to linked data principles, in order to gain visibility. Our interest is to use these data to construct an extended transportation network that links these new services to existing ones. The main problems we tackle in this article fall in the categories of automatic schema matching and data interlinking. We propose an approach that uses web services as mediators to help in automatically detecting geospatial properties and mapping them between two different schemas. On the other hand, we propose a new interlinking approach that enables the user to define rich semantic links between datasets in a flexible and customizable way.
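The matching-and-interlinking pipeline the abstract describes can be sketched in miniature: guess which fields of two differently named schemas are geospatial, then link records that are spatially close. All field names, coordinates, and thresholds below are invented for illustration; the article's actual approach uses web services as mediators and richer, user-definable semantic links.

```python
import math

# Two hypothetical transportation datasets describing the same stop
# under different schemas (all field names and coordinates invented).
stops_a = [{"stop_id": "A1", "lat": 48.8584, "lon": 2.2945}]
stops_b = [{"id": "B7", "y_coord": 48.8583, "x_coord": 2.2947}]

def detect_geo_fields(records):
    """Guess which numeric fields hold latitude/longitude from names and ranges."""
    lat = lon = None
    for key, value in records[0].items():
        if not isinstance(value, (int, float)):
            continue
        if -90 <= value <= 90 and ("lat" in key or "y" in key):
            lat = key
        elif -180 <= value <= 180 and ("lon" in key or "x" in key):
            lon = key
    return lat, lon

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def interlink(a, b, id_a="stop_id", id_b="id", threshold_m=50.0):
    """Link records of a and b whose detected coordinates are near-identical."""
    lat_a, lon_a = detect_geo_fields(a)
    lat_b, lon_b = detect_geo_fields(b)
    links = []
    for ra in a:
        for rb in b:
            d = haversine_m(ra[lat_a], ra[lon_a], rb[lat_b], rb[lon_b])
            if d <= threshold_m:
                links.append((ra[id_a], rb[id_b], round(d, 1)))
    return links

print(interlink(stops_a, stops_b))
```

In the article's setting, the schema mapping is mediated by web services and the resulting links are expressed as rich semantic relations between linked-data resources rather than plain tuples.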

  16. Desktop Publishing in Libraries.

    Science.gov (United States)

    Cisler, Steve

    1987-01-01

    Describes the components, costs, and capabilities of several desktop publishing systems, and examines their possible impact on work patterns within organizations. The text and graphics of the article were created using various microcomputer software packages. (CLB)

  17. Sisyphus desperately seeking publisher

    Indian Academy of Sciences (India)

    Antoinette Molinié

    The editors wield their Olympian authority by making today's scientists endlessly push their weighty boulders up ... since publishing has become a highly lucrative business. ... estimate that the richest 8.4 % own 83.3 % (see Global Wealth.

  18. Issues in Electronic Publishing.

    Science.gov (United States)

    Meadow, Charles T.

    1997-01-01

    Discusses issues related to electronic publishing. Topics include writing; reading; production, distribution, and commerce; copyright and ownership of intellectual property; archival storage; technical obsolescence; control of content; equality of access; and cultural changes. (Author/LRW)

  19. The Library as Publisher.

    Science.gov (United States)

    Field, Roy

    1979-01-01

    Presents a guide to for-profit library publishing of reprints, original manuscripts, and smaller items. Discussed are creation of a publications panel to manage finances and preparation, determining prices of items, and drawing up author contracts. (SW)

  20. The Book Publishing Industry

    OpenAIRE

    Jean-Paul Simon; Giuditta de Prato

    2012-01-01

    This report offers an in-depth analysis of the major economic developments in the book publishing industry. The analysis integrates data from a statistical report published earlier as part of this project. The report is divided into 4 main parts. Chapter 1, the introduction, puts the sector into an historical perspective. Chapter 2 introduces the markets at a global and regional level; describes some of the major EU markets (France, Germany, Italy, Spain and the United Kingdom). Chapter 3 ana...

  1. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; George, Siemens; Davide, Taibi; Drachsler, Hendrik

    2018-01-01

    The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full-text publications in the domain of

  2. Open-Access Publishing

    Directory of Open Access Journals (Sweden)

    Nedjeljko Frančula

    2013-06-01

    Full Text Available Nature, one of the most prominent scientific journals, dedicated one of its issues to recent changes in scientific publishing (Vol. 495, Issue 7442, 27 March 2013). Its editors stressed that the words technology and revolution are closely related when it comes to scientific publishing. In addition, the transformation of research publishing is not so much a revolution as a war of attrition in which all sides are dug in. The most important change they refer to is the open-access model, in which an author or an institution pays in advance for publishing a paper in a journal, and the paper is then available to users on the Internet free of charge. According to preliminary results of a survey conducted among 23 000 scientists by the publisher of Nature, 45% of them believe all papers should be published in open access, but at the same time 22% of them would not allow the use of papers for commercial purposes. Attitudes toward open access vary according to scientific disciplines, leading the editors to conclude that the revolution still does not suit everyone.

  3. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.
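The daily click summaries described above lend themselves to simple aggregation. The sketch below builds per-student, per-day click totals from rows shaped like the dataset's VLE interaction table; the column names (`id_student`, `date`, `sum_click`) and the toy values are assumptions for illustration, not guaranteed to match the actual release.

```python
import csv
import io
from collections import defaultdict

# Toy rows shaped like per-day VLE click summaries (assumed column names).
sample = """code_module,id_student,date,sum_click
AAA,11391,0,4
AAA,11391,0,7
AAA,11391,1,2
AAA,28400,0,1
"""

def total_clicks(fh):
    """Aggregate clicks per (student, day) across all VLE activities."""
    totals = defaultdict(int)
    for row in csv.DictReader(fh):
        totals[(row["id_student"], int(row["date"]))] += int(row["sum_click"])
    return dict(totals)

print(total_clicks(io.StringIO(sample)))
# {('11391', 0): 11, ('11391', 1): 2, ('28400', 0): 1}
```

The same pattern scales to the full archive by passing an open file handle for the clickstream CSV instead of the in-memory sample.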

  4. Publishers and repositories

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    The impact of self-archiving on journals and publishers is an important topic for all those involved in scholarly communication. There is some evidence that the physics arXiv has had no impact on physics journals, while 'economic common sense' suggests that some impact is inevitable. I shall review recent studies of librarian attitudes towards repositories and journals, and place this in the context of IOP Publishing's experiences with arXiv. I shall offer some possible reasons for the mismatch between these perspectives and then discuss how IOP has linked with arXiv and experimented with OA publishing. As well as launching OA journals, we have co-operated with Cornell and the arXiv on Eprintweb.org, a platform that offers new features to repository users.

  5. Ethics in Scientific Publishing

    Science.gov (United States)

    Sage, Leslie J.

    2012-08-01

    We all learn in elementary school not to turn in other people's writing as if it were our own (plagiarism), and in high school science labs not to fake our data. But there are many other practices in scientific publishing that are depressingly common and almost as unethical. At about the 20 percent level, authors are deliberately hiding recent work -- by themselves as well as by others -- so as to enhance the apparent novelty of their most recent paper. Some people lie about the dates the data were obtained, to cover up conflicts of interest or inappropriate use of privileged information. Others will publish the same conference proceeding in multiple volumes, or publish the same result in multiple journals with only trivial additions of data or analysis (self-plagiarism). These shady practices should be roundly condemned and stopped. I will discuss these and other unethical actions I have seen over the years, and steps editors are taking to stop them.

  6. Decentralized provenance-aware publishing with nanopublications

    Directory of Open Access Journals (Sweden)

    Tobias Kuhn

    2016-08-01

    Full Text Available Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there currently exist no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
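A nanopublication packages a single assertion together with its provenance and publication metadata as RDF named graphs, bound together by a head graph. The sketch below renders that structure as TriG text; the `np:` and `prov:` terms are the standard nanopublication-schema and PROV vocabularies, but every `example.org` URI, the subject/predicate/object, and the date are placeholders invented for illustration.

```python
# Minimal TriG rendering of a nanopublication: a head graph binding the
# assertion, provenance, and publication-info named graphs together.
def nanopub_trig(np_id, subject, predicate, obj, author_orcid):
    base = f"http://example.org/np/{np_id}"
    return f"""@prefix np: <http://www.nanopub.org/nschema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix : <{base}#> .

:head {{
    :pub a np:Nanopublication ;
        np:hasAssertion :assertion ;
        np:hasProvenance :provenance ;
        np:hasPublicationInfo :pubinfo .
}}
:assertion {{
    <{subject}> <{predicate}> <{obj}> .
}}
:provenance {{
    :assertion prov:wasAttributedTo <https://orcid.org/{author_orcid}> .
}}
:pubinfo {{
    :pub prov:generatedAtTime "2016-08-01" .
}}
"""

print(nanopub_trig("np1",
                   "http://example.org/gene/BRCA1",
                   "http://example.org/rel/associatedWith",
                   "http://example.org/disease/breastCancer",
                   "0000-0002-1825-0097"))
```

Because each graph is separately addressable, a server network can store, replicate, and verify assertions independently of where they were first published.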

  7. Publisher Correction: Predicting unpredictability

    Science.gov (United States)

    Davis, Steven J.

    2018-06-01

    In this News & Views article originally published, the wrong graph was used for panel b of Fig. 1, and the numbers on the y axes of panels a and c were incorrect; the original and corrected Fig. 1 is shown below. This has now been corrected in all versions of the News & Views.

  8. Web Publishing Schedule

    Science.gov (United States)

    Section 207(f)(2) of the E-Gov Act requires federal agencies to develop an inventory and establish a schedule of information to be published on their Web sites, to make those schedules available for public comment, and to post the schedules on the Web site.

  9. Hprints - Licence to publish

    DEFF Research Database (Denmark)

    Rabow, Ingegerd; Sikström, Marjatta; Drachen, Thea Marie

    2010-01-01

    realised the potential advantages for them. The universities have a role here as well as the libraries that manage the archives and support scholars in various aspects of the publishing processes. Libraries are traditionally service providers with a mission to facilitate the knowledge production...

  10. The Academic Publishing Industry

    DEFF Research Database (Denmark)

    Nell, Phillip Christopher; Wenzel, Tim Ole; Schmidt, Florian

    2014-01-01

    The case is intended to be used as a basis for class discussion rather than to illustrate effective handling of a managerial situation. It is based on published sources, interviews, and personal experience. The authors have disguised some names and other identifying information to protect confidentiality.

  11. Desktop Publishing in Education.

    Science.gov (United States)

    Hall, Wendy; Layman, J.

    1989-01-01

    Discusses the state of desktop publishing (DTP) in education today and describes the weaknesses of the systems available for use in the classroom. Highlights include document design and layout; text composition; graphics; word processing capabilities; a comparison of commercial and educational DTP packages; and skills required for DTP. (four…

  12. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  13. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  14. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs , K., M. Goldsmith, P. Egeghy , K....

  15. Support open access publishing

    DEFF Research Database (Denmark)

    Ekstrøm, Jeannette

    2013-01-01

    The Support Open Access Publishing project aims to update the Sherpa/Romeo database (www.sherpa.ac.uk/romeo) with subject-relevant Danish journals. The project will also investigate the possibilities of developing a database where researchers, across relevant journal information...

  16. Prepare to publish.

    Science.gov (United States)

    Price, P M

    2000-01-01

    "I couldn't possibly write an article." "I don't have anything worthwhile to write about." "I am not qualified to write for publication." Do any of these statements sound familiar? This article is intended to dispel these beliefs. You can write an article. You care for the most complex patients in the health care system, so you do have something worthwhile to write about. Besides correct spelling and grammar, there are no special skills, certificates or diplomas required for publishing. You are qualified to write for publication. The purpose of this article is to take the mystique out of the publication process. Each step of publishing an article will be explained, from idea formation to framing your first article. Practical examples and recommendations will be presented. The essential components of the APA format necessary for Dynamics: The Official Journal of the Canadian Association of Critical Care Nurses will be outlined, and resources to assist you will be provided.

  17. Reclaiming Society Publishing

    Directory of Open Access Journals (Sweden)

    Philip E. Steinberg

    2015-07-01

    Full Text Available Learned societies have become aligned with commercial publishers, who have increasingly taken over the societies' function as independent providers of scholarly information. Using the example of geographical societies, the advantages and disadvantages of this trend are examined. It is argued that, in an era of digital publication, learned societies can offer leadership with a new model of open access that can guarantee high-quality scholarly material whose publication costs are supported by society membership dues.

  18. RETRACTION: Publishers' Note

    Science.gov (United States)

    Graeme Watt (Executive Editor)

    2010-06-01

    Withdrawal of the paper "Was the fine-structure constant variable over cosmological time?" by L. D. Thong, N. M. Giao, N. T. Hung and T. V. Hung (EPL, 87 (2009) 69002) This paper has been formally withdrawn on ethical grounds because the article contains extensive and repeated instances of plagiarism. EPL treats all identified evidence of plagiarism in the published articles most seriously. Such unethical behaviour will not be tolerated under any circumstance. It is unfortunate that this misconduct was not detected before going to press. My thanks to Editor colleagues from other journals for bringing this fact to my attention.

  19. Proglacial river stage, discharge, and temperature datasets from the Akuliarusiarsuup Kuua River northern tributary, Southwest Greenland, 2008–2011

    Directory of Open Access Journals (Sweden)

    A. K. Rennermalm

    2012-05-01

    Full Text Available Pressing scientific questions concerning the Greenland ice sheet's climatic sensitivity, hydrology, and contributions to current and future sea level rise require hydrological datasets to resolve. While direct observations of ice sheet meltwater losses can be obtained in terrestrial rivers draining the ice sheet and from lake levels, few such datasets exist. We present a new hydrologic dataset from previously unmonitored sites in the vicinity of Kangerlussuaq, Southwest Greenland. This dataset contains measurements of river stage and discharge for three sites along the Akuliarusiarsuup Kuua (the Watson River's northern tributary), with 30 min temporal resolution between June 2008 and July 2011. Additional data of water temperature, air pressure, and lake stage are also provided. Flow velocity and depth measurements were collected at sites with incised bedrock or structurally reinforced channels to maximize data quality. However, like most proglacial rivers, high turbulence and bedload transport introduce considerable uncertainty to the derived discharge estimates. Eleven propagating error sources were quantified, and reveal that the largest uncertainties are associated with flow depth observations. Mean discharge uncertainties (approximately the 68% confidence interval) are two to four times larger (±19% to ±43%) than previously published estimates for Greenland rivers. Despite these uncertainties, this dataset offers a rare collection of direct measurements of ice sheet runoff to the global ocean and is freely available for scientific use at http://dx.doi.org/10.1594/PANGAEA.762818.
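The kind of propagated uncertainty quoted above can be illustrated with standard first-order (quadrature) error propagation for a rectangular-channel discharge estimate Q = v · w · d. The values and uncertainties below are invented for illustration, not taken from the dataset; they simply show how a dominant depth uncertainty drives the total.

```python
import math

# Illustrative measurements with 1-sigma uncertainties (not from the dataset).
v, dv = 2.5, 0.3   # flow velocity (m/s)
w, dw = 12.0, 0.2  # channel width (m)
d, dd = 1.4, 0.25  # flow depth (m) -- the dominant error source here

# Discharge and its relative uncertainty, combining terms in quadrature.
Q = v * w * d
rel = math.sqrt((dv / v) ** 2 + (dw / w) ** 2 + (dd / d) ** 2)
print(f"Q = {Q:.1f} m^3/s +/- {100 * rel:.0f}%")  # prints "Q = 42.0 m^3/s +/- 22%"
```

With eleven error sources instead of three, the same quadrature sum applies; the largest relative term dominates the total, which is why the depth observations control the quoted ±19% to ±43% range.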

  20. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  1. PUBLISHER'S ANNOUNCEMENT: Editorial developments

    Science.gov (United States)

    2009-01-01

    We are delighted to announce that from January 2009, Professor Murray T Batchelor of the Australian National University, Canberra will be the new Editor-in-Chief of Journal of Physics A: Mathematical and Theoretical. Murray Batchelor has been Editor of the Mathematical Physics section of the journal since 2007. Prior to this, he served as a Board Member and an Advisory Panel member for the journal. His primary area of research is the statistical mechanics of exactly solved models. He holds a joint appointment in mathematics and physics and has held visiting positions at the Universities of Leiden, Amsterdam, Oxford and Tokyo. We very much look forward to working with Murray to continue to improve the journal's quality and interest to the readership. We would like to thank our outgoing Editor-in-Chief, Professor Carl M Bender. Carl has done a magnificent job as Editor-in-Chief and has worked tirelessly to improve the journal over the last five years. Carl has been instrumental in designing and implementing strategies that have enhanced the quality of papers published and service provided by Journal of Physics A: Mathematical and Theoretical. Notably, under his tenure, we have introduced the Fast Track Communications (FTC) section to the journal. This section provides a venue for outstanding short papers that report new and timely developments in mathematical and theoretical physics and offers accelerated publication and high visibility for our authors. During the last five years, we have raised the quality threshold for acceptance in the journal and now reject over 60% of submissions. As a result, papers published in Journal of Physics A: Mathematical and Theoretical are amongst the best in the field. We have also maintained and improved on our excellent receipt-to-first-decision times, which now average less than 50 days for papers. We have recently announced another innovation; the Journal of Physics A Best Paper Prize. These prizes will honour excellent papers

  2. Why publish with AGU?

    Science.gov (United States)

    Graedel, T. E.

The most visible activity of the American Geophysical Union is its publication of scientific journals. There are eight of these: Journal of Geophysical Research—Space Physics (JGR I), Journal of Geophysical Research—Solid Earth (JGR II), Journal of Geophysical Research—Oceans and Atmospheres (JGR III), Radio Science (RS), Water Resources Research (WRR), Geophysical Research Letters (GRL), Reviews of Geophysics and Space Physics (RGSP), and the newest, Tectonics. AGU's journals have established solid reputations for scientific excellence over the years. Reputation is not sufficient to sustain a high-quality journal, however, since other factors enter into an author's decision on where to publish his or her work. In this article the characteristics of AGU's journals are compared with those of its competitors, with the aim of furnishing guidance to prospective authors and a better understanding of the value of the products to purchasers.

  3. Re-inspection of small RNA sequence datasets reveals several novel human miRNA genes.

    Directory of Open Access Journals (Sweden)

    Thomas Birkballe Hansen

Full Text Available BACKGROUND: miRNAs are key players in gene expression regulation. To fully understand the complex nature of cellular differentiation or the initiation and progression of disease, it is important to assess the expression patterns of as many miRNAs as possible. Identifying novel miRNAs is therefore an essential prerequisite for a comprehensive and coherent understanding of cellular biology. METHODOLOGY/PRINCIPAL FINDINGS: Based on two extensive, but previously published, small RNA sequence datasets from human embryonic stem cells and human embryoid bodies, respectively [1], we identified 112 novel miRNA-like structures and were able to validate miRNA processing in 12 out of 17 investigated cases. Several miRNA candidates were furthermore substantiated by including additional available small RNA datasets, thereby demonstrating the power of combining datasets to identify miRNAs that might otherwise be dismissed as experimental noise. CONCLUSIONS/SIGNIFICANCE: Our analysis highlights that existing datasets have not yet been exhaustively studied, and that continuous re-analysis of the available data is important to uncover all features of small RNA sequencing.

  4. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  5. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  6. Quality Controlling CMIP datasets at GFDL

    Science.gov (United States)

    Horowitz, L. W.; Radhakrishnan, A.; Balaji, V.; Adcroft, A.; Krasting, J. P.; Nikonov, S.; Mason, E. E.; Schweitzer, R.; Nadeau, D.

    2017-12-01

As GFDL makes the switch from model development to production in light of the Coupled Model Intercomparison Project (CMIP), its efforts have shifted to testing and, more importantly, to establishing guidelines and protocols for quality control and semi-automated data publishing. Every CMIP cycle introduces key challenges, and the upcoming CMIP6 is no exception. The new CMIP experimental design comprises multiple MIPs facilitating research in different focus areas. This paradigm has implications not only for the groups that develop the models and conduct the runs, but also for the groups that monitor, analyze and quality control the datasets before publication and before their findings make their way into reports like the IPCC (Intergovernmental Panel on Climate Change) Assessment Reports. In this talk, we discuss some of the paths taken at GFDL to quality control the CMIP-ready datasets, including: Jupyter notebooks, PrePARE, and a LAMP (Linux, Apache, MySQL, PHP/Python/Perl) technology-driven tracker system to monitor the status of experiments qualitatively and quantitatively and to provide additional metadata and analysis services, along with some built-in controlled-vocabulary validations in the workflow. In addition, we discuss the integration of community-based model evaluation software (ESMValTool, PCMDI Metrics Package, and ILAMB) as part of our CMIP6 workflow.

  7. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and

  8. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  9. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site years. Since the workshop, several additional site years have been added and the dataset now contains over 920 site years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continues to evolve as well. There are on the order of 120 site contacts, and 60 proposals involving around 120 researchers have been approved to use the data. The size and complexity of the dataset and collaboration has led to a new approach to providing access to the data and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new web site is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.

  10. Standardized mortality in eating disorders--a quantitative summary of previously published and new evidence

    DEFF Research Database (Denmark)

    Nielsen, Søren; Møller-Madsen, S.; Isager, Torben

    2011-01-01

strong evidence for an increase in SMR for anorexia nervosa (AN), whereas no firm conclusions could be drawn for bulimia nervosa (BN). Bias caused by loss to follow-up was quantified and found non-negligible in some samples (possible increase in SMR from 25% to 240%). We did not find a significant effect...

  11. Standardized mortality in eating disorders--a quantitative summary of previously published and new evidence

    DEFF Research Database (Denmark)

    Nielsen, Søren; Møller-Madsen, S.; Isager, Torben

    1998-01-01

strong evidence for an increase in SMR for anorexia nervosa (AN), whereas no firm conclusions could be drawn for bulimia nervosa (BN). Bias caused by loss to follow-up was quantified and found non-negligible in some samples (possible increase in SMR from 25% to 240%). We did not find a significant effect...

  12. Leiomyosarcoma of the Prostate: Case Report and Review of 54 Previously Published Cases

    Directory of Open Access Journals (Sweden)

    Gerasimos P. Vandoros

    2008-01-01

    Full Text Available Prostate leiomyosarcoma is an extremely rare and highly aggressive neoplasm that accounts for less than 0.1% of primary prostate malignancies. We present a patient with primary leiomyosarcoma of the prostate and review 54 cases reported in the literature to discuss the clinical, diagnostic and therapeutic aspects of this uncommon tumor. Median survival was estimated at 17 months (95% C.I. 20.7–43.7 months and the 1-, 3-, and 5-year actuarial survival rates were 68%, 34%, and 26%, respectively. The only factors predictive of long-term survival were negative surgical margins and absence of metastatic disease at presentation. A multidisciplinary approach is necessary for appropriate management of this dire entity.

  13. Strategies and guidelines for scholarly publishing of biodiversity data

    Directory of Open Access Journals (Sweden)

    Lyubomir Penev

    2017-02-01

Full Text Available The present paper describes policies and guidelines for scholarly publishing of biodiversity and biodiversity-related data, elaborated and updated during the Framework Programme 7 EU BON project on the basis of an earlier version published on Pensoft's website in 2011. The document discusses some general concepts, including a definition of datasets, incentives to publish data and licenses for data publishing. Further, it defines and compares several routes for data publishing, namely as (1) supplementary files to research articles, which may be made available directly by the publisher; (2) data published in a specialized open data repository with a link to it from the research article; (3) a data paper, i.e., a specific, stand-alone publication describing a particular dataset or a collection of datasets; or (4) integrated narrative and data publishing through online import/download of data into/from manuscripts, as provided by the Biodiversity Data Journal. The paper also contains detailed instructions on how to prepare and peer review data intended for publication, listed under the Guidelines for Authors and Reviewers, respectively. Special attention is given to existing standards, protocols and tools to facilitate data publishing, such as the Integrated Publishing Toolkit of the Global Biodiversity Information Facility (GBIF IPT) and the Darwin Core Archive (DwC-A). A separate section describes the leading data hosting/indexing infrastructures and repositories for biodiversity and ecological data.

  14. Discovery and Reuse of Open Datasets: An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Sara

    2016-07-01

    Full Text Available Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories. Methods: Using Thomson Reuters’ Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric. The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description. Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well; formatted for use with a variety of software; and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates. Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.

  15. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7) Kyzyl, Tyva Republic; and (8) Yasawa, Fiji. Related publication: Purzycki et al. (2016). Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality. Nature, 530(7590): 327-330.

  16. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and the ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event-modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.
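A minimal Python sketch of consuming such an ASCII table and comparing seismic amplitude under calm versus windy conditions. The column names and the 3 m/s wind threshold below are hypothetical illustrations, not the archive's actual PDS labels:

```python
import io

# Hypothetical whitespace-delimited excerpt in the spirit of the
# archive's ASCII tables; the real PDS column names will differ.
SAMPLE = """\
sol seconds amplitude wind_ms
400 100.0 3.1 2.0
400 100.5 8.7 6.5
400 101.0 9.2 7.1
401 200.0 2.8 1.5
"""

def read_table(fh):
    """Parse a whitespace-delimited ASCII table into dict rows."""
    lines = [ln.split() for ln in fh if ln.strip()]
    header, body = lines[0], lines[1:]
    return [dict(zip(header, map(float, row))) for row in body]

rows = read_table(io.StringIO(SAMPLE))
# Wind-generated lander motion dominates the signal, so a first-pass
# check is to compare amplitudes in calm vs windy records
# (the 3 m/s cut is arbitrary, chosen only for this toy sample).
calm = [r["amplitude"] for r in rows if r["wind_ms"] < 3.0]
windy = [r["amplitude"] for r in rows if r["wind_ms"] >= 3.0]
print(sum(calm) / len(calm), sum(windy) / len(windy))
```

The same pattern extends to joining the seismometer records with the bundled meteorological summary records for correlation studies.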

  17. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analyses targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed, benefiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts well in time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing recovery of some of the data for analysis use and further improving the certification efficiency, which is now at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  18. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  19. Utility-preserving anonymization for health data publishing.

    Science.gov (United States)

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing, and thus satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and thus, various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method. Our anonymization algorithm applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, our proposed method shows lower information loss than the existing method. In real-world EHR analysis, results show only a small error between data anonymized by the proposed method and the original data. We propose a new utility-preserving anonymization method and an anonymization algorithm using the proposed method. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than those anonymized by previous approaches.
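The k-anonymity and full-domain generalization concepts mentioned above can be illustrated with a short sketch. This is a generic toy example, not the authors' algorithm; the attribute hierarchy (age decades, ZIP-code prefixes) and the sample records are assumptions made purely for illustration:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """k-anonymity holds when every combination of quasi-identifier
    values appears in at least k records."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

def generalize(record, level):
    """Full-domain generalization over an illustrative hierarchy:
    level 0 keeps values; level 1 coarsens age to a decade range and
    ZIP to a 3-digit prefix; level 2 suppresses both entirely."""
    r = dict(record)
    if level >= 1:
        decade = (r["age"] // 10) * 10
        r["age"] = f"{decade}-{decade + 9}"
        r["zip"] = r["zip"][:3] + "**"
    if level >= 2:
        r["age"] = "*"
        r["zip"] = "*****"
    return r

records = [
    {"age": 34, "zip": "02139", "diagnosis": "flu"},
    {"age": 36, "zip": "02141", "diagnosis": "asthma"},
    {"age": 33, "zip": "02142", "diagnosis": "flu"},
]
# Raw records are unique on (age, zip), so 2-anonymity fails.
print(is_k_anonymous(records, ["age", "zip"], 2))
# One generalization step maps all three to ("30-39", "021**").
level1 = [generalize(r, 1) for r in records]
print(is_k_anonymous(level1, ["age", "zip"], 2))
```

The paper's contribution is precisely to reduce the information loss such generalization steps incur, via counterfeit record insertion and a catalog that lets analysts correct for the counterfeits.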

  20. Passive Containment DataSet

    Science.gov (United States)

    This data is for Figures 6 and 7 in the journal article. The data also includes the two EPANET input files used for the analysis described in the paper, one for the looped system and one for the block system.This dataset is associated with the following publication:Grayman, W., R. Murray , and D. Savic. Redesign of Water Distribution Systems for Passive Containment of Contamination. JOURNAL OF THE AMERICAN WATER WORKS ASSOCIATION. American Water Works Association, Denver, CO, USA, 108(7): 381-391, (2016).

  1. A Tenebrionid beetle's dataset (Coleoptera, Tenebrionidae) from Peninsula Valdés (Chubut, Argentina).

    Science.gov (United States)

    Cheli, Germán H; Flores, Gustavo E; Román, Nicolás Martínez; Podestá, Darío; Mazzanti, Renato; Miyashiro, Lidia

    2013-12-18

The Natural Protected Area Peninsula Valdés, located in Northeastern Patagonia, is one of the largest conservation units of arid lands in Argentina. Although this area has been on the UNESCO World Heritage List since 1999, it has been continually exposed to sheep grazing and cattle farming for more than a century, which have had a negative impact on the local environment. Our aim is to describe the first dataset of tenebrionid beetle species living in Peninsula Valdés and their relationship to sheep grazing. The dataset contains 118 records on 11 species and 198 adult individuals collected. Beetles were collected using pitfall traps in the two major environmental units of Peninsula Valdés, taking into account grazing intensities, over a three-year time frame from 2005-2007. Data quality was enhanced by following best practices suggested in the literature during the digitization and geo-referencing processes. Moreover, identification of specimens and current accurate spelling of scientific names were reviewed. Finally, post-validation processes using DarwinTest software were applied. Specimens have been deposited at the Entomological Collection of the Centro Nacional Patagónico (CENPAT-CONICET). The dataset is part of the database of this collection and has been published on the internet through the GBIF Integrated Publishing Toolkit (IPT) (http://data.gbif.org/datasets/resource/14669/). Furthermore, it is the first dataset for tenebrionid beetles of arid Patagonia available in the GBIF database, and the first based on a previously designed and standardized sampling scheme to assess the interaction between these beetles and grazing in the area. The main purposes of this dataset are to ensure accessibility to data associated with Tenebrionidae specimens from Peninsula Valdés (Chubut, Argentina), to contribute to GBIF with primary data about Patagonian tenebrionids and, finally, to promote the Entomological Collection of Centro Nacional Patagónico (CENPAT

  2. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command-line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  3. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command-line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  4. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command-line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  5. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command-line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  6. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  7. Satellite-Based Precipitation Datasets

    Science.gov (United States)

    Munchak, S. J.; Huffman, G. J.

    2017-12-01

Of the possible sources of precipitation data, those based on satellites provide the greatest spatial coverage. There is a wide selection of datasets, algorithms, and versions from which to choose, which can be confusing to non-specialists wishing to use the data. The International Precipitation Working Group (IPWG) maintains tables of the major publicly available, long-term, quasi-global precipitation data sets (http://www.isac.cnr.it/ipwg/data/datasets.html), and this talk briefly reviews the various categories. As examples, NASA provides two sets of quasi-global precipitation data sets: the older Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and the current Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG). Both provide near-real-time and post-real-time products that are uniformly gridded in space and time. The TMPA products are 3-hourly at 0.25°x0.25° on the latitude band 50°N-S for about 16 years, while the IMERG products are half-hourly at 0.1°x0.1° on 60°N-S for over 3 years (with plans to go to 16+ years in Spring 2018). In addition to the precipitation estimates, each data set provides fields of other variables, such as the satellite sensor providing estimates and estimated random error. The discussion concludes with advice about determining suitability for use, the necessity of being clear about product names and versions, and the need for continued support for satellite- and surface-based observation.
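The grid specifications quoted above imply a large jump in data volume from TMPA to IMERG. A quick calculation makes this concrete; the cell counts follow directly from the stated resolutions and latitude bands:

```python
def grid_size(res_deg, lat_band_deg):
    """Number of cells in a global-longitude grid spanning
    +/- lat_band_deg of latitude at res_deg resolution.
    round() guards against float artifacts like 120/0.1 = 1199.99..."""
    n_lon = round(360 / res_deg)
    n_lat = round(2 * lat_band_deg / res_deg)
    return n_lon * n_lat

# TMPA: 3-hourly, 0.25 deg, 50N-50S (per the abstract)
tmpa_cells = grid_size(0.25, 50)   # 1440 x 400 = 576,000
tmpa_steps_per_day = 24 // 3       # 8 timesteps/day
# IMERG: half-hourly, 0.1 deg, 60N-60S
imerg_cells = grid_size(0.1, 60)   # 3600 x 1200 = 4,320,000
imerg_steps_per_day = 48           # timesteps/day
print(tmpa_cells, imerg_cells)
ratio = (imerg_cells * imerg_steps_per_day) / (tmpa_cells * tmpa_steps_per_day)
print(ratio)  # 45.0: IMERG has 45x the space-time samples of TMPA
```

The 45-fold increase in samples per day is one reason product choice matters for storage and processing budgets.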

  8. Choosing the Right Desktop Publisher.

    Science.gov (United States)

    Eiser, Leslie

    1988-01-01

    Investigates the many different desktop publishing packages available today. Lists the steps to desktop publishing. Suggests which package to use with specific hardware available. Compares several packages for IBM, Mac, and Apple II based systems. (MVL)

  9. EPIC: Electronic Publishing is Cheaper.

    Science.gov (United States)

    Regier, Willis G.

    Advocates of inexpensive publishing confront a widespread complaint that there is already an overproduction of scholarship that electronic publishing will make worse. The costs of electronic publishing correlate to a clutch of choices: speeds of access, breadth and depth of content, visibility, flexibility, durability, dependability, definition of…

  10. A new bed elevation dataset for Greenland

    Directory of Open Access Journals (Sweden)

    J. L. Bamber

    2013-03-01

    We present a new bed elevation dataset for Greenland derived from a combination of multiple airborne ice thickness surveys undertaken between the 1970s and 2012. Around 420 000 line kilometres of airborne data were used, with roughly 70% of this having been collected since the year 2000, when the last comprehensive compilation was undertaken. The airborne data were combined with satellite-derived elevations for non-glaciated terrain to produce a consistent bed digital elevation model (DEM) over the entire island, including across the glaciated–ice-free boundary. The DEM was extended to the continental margin with the aid of bathymetric data, primarily from a compilation for the Arctic. Ice thickness was determined where an ice shelf exists from a combination of surface elevation and radar soundings. The across-track spacing between flight lines warranted interpolation at 1 km postings for significant sectors of the ice sheet. Grids of ice surface elevation, error estimates for the DEM, ice thickness and data sampling density were also produced, alongside a mask of land/ocean/grounded ice/floating ice. Errors in bed elevation range from a minimum of ±10 m to about ±300 m, as a function of distance from an observation and local topographic variability. A comparison with the compilation published in 2001 highlights the improvement in resolution afforded by the new datasets, particularly along the ice sheet margin, where ice velocity is highest and changes in ice dynamics most marked. We estimate that the volume of ice included in our land-ice mask would raise mean sea level by 7.36 m, excluding any solid earth effects that would take place during ice sheet decay.
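The 7.36 m sea-level-equivalent figure quoted above can be converted back into an ice volume with standard round numbers for ocean area and densities. These constants are assumptions of mine, not values from the paper:

```python
# Rough consistency check of the 7.36 m sea-level-equivalent figure.
# Ocean area and densities below are common textbook values (assumed),
# not taken from the Bamber et al. dataset description.

RHO_ICE = 917.0        # kg/m^3, typical glacier ice density (assumed)
RHO_WATER = 1000.0     # kg/m^3, fresh-water convention (assumed)
OCEAN_AREA = 3.618e14  # m^2, global ocean surface area (assumed)

sle_m = 7.36  # sea-level equivalent from the abstract

# Ice volume implied by that sea-level rise:
ice_volume_m3 = sle_m * OCEAN_AREA * RHO_WATER / RHO_ICE
ice_volume_km3 = ice_volume_m3 / 1e9

print(f"{ice_volume_km3:.2e} km^3")  # ~2.9 million km^3
```

The result, about 2.9 million km³, is consistent with commonly quoted estimates of the Greenland ice sheet volume.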

  11. International Marketing Developing Publishing Business

    Directory of Open Access Journals (Sweden)

    Eugenijus Chlivickas

    2015-05-01

    Lithuanian integration into the Eurozone, and the development of the Lithuanian publishing business within the European Union and beyond, pose an important problem requiring a solution. To promote the dissemination of printed books and literacy in Lithuania and abroad, and to properly present Lithuania's achievements to foreign countries, it is important to ensure the development of Lithuanian-language, educational and scientific book publishing. The article examines the characteristics of international marketing in publishing, and of state publishing houses in Lithuania and worldwide, on the basis of foreign and Lithuanian scholars' theoretical insights into the instruments and opportunities of international marketing, developing proposals for integrating the publishing business into new economic conditions.

  12. What comes first? Publishing business or publishing studies?

    Directory of Open Access Journals (Sweden)

    Josipa Selthofer

    2015-07-01

    The aim of this paper is to analyze and compare publishing studies, their programmes at the undergraduate and graduate levels, and the scholars involved in the teaching of publishing courses at the top universities around the world and in Croatia. Since the traditional publishing business is rapidly changing, new skills and new jobs are involved in it. The main research question is: Can modern publishing studies produce a modern publisher? Or is it the other way around? The hypothesis of the paper is that scholars involved in the teaching of publishing courses at the top universities around the world have a background in the publishing business. So, can they prepare their students for the future, and can their students gain the competencies they need to compete in a confusing world of digital authors and electronic books? The research methods used were content analysis and comparison. The research sample included 36 university publishing programmes at the undergraduate and graduate level worldwide (24 MA, 12 BA). The sample was limited mainly to the English-speaking countries; in most non-English-speaking countries, it was difficult to analyse the programme curriculum in the native language because the programme and course descriptions did not exist. In the data-gathering phase, a customized web application was used for content analysis. The application has three main sections: a list of websites to evaluate, a visual representation of the uploaded website, and a list of characteristics grouped by categories for quantifying data. About twenty years ago, publishing was not considered a separate scientific branch in Croatia; publishing studies are therefore a new phenomenon to both scholars and publishers there. To create a new, ideal publishing course, can we simply copy global trends, or is it better to create something of our own?

  13. The GBIF integrated publishing toolkit: facilitating the efficient publishing of biodiversity data on the internet.

    Directory of Open Access Journals (Sweden)

    Tim Robertson

    The planet is experiencing an ongoing global biodiversity crisis. Measuring the magnitude and rate of change more effectively requires access to organized, easily discoverable, and digitally formatted biodiversity data, both legacy and new, from across the globe. Assembling this coherent digital representation of biodiversity requires the integration of data that have historically been analog, dispersed, and heterogeneous. The Integrated Publishing Toolkit (IPT) is a software package developed to support biodiversity dataset publication in a common format. The IPT's two primary functions are to (1) encode existing species occurrence datasets and checklists, such as records from natural history collections or observations, in the Darwin Core standard to enhance interoperability of data, and (2) publish and archive data and metadata for broad use in a Darwin Core Archive, a set of files following a standard format. Here we discuss the key need for the IPT, how it has developed in response to community input, and how it continues to evolve to streamline and enhance the interoperability, discoverability, and mobilization of new data types beyond basic Darwin Core records. We close with a discussion of how the IPT has impacted the biodiversity research community and how it enhances data publishing in more traditional journal venues, along with new features implemented in the latest version of the IPT and future plans for further enhancements.

  14. THE QUALITY CRITERIA AND SELF-PUBLISHING IN SCIENTIFIC PUBLISHING

    Directory of Open Access Journals (Sweden)

    Almudena Mangas-Vega

    2015-11-01

    Self-publishing has grown notably in recent years. It is a process that goes beyond a simple change of who leads the publication, since it also changes the roles of agents that had been consolidated over time. A self-published work does not have to mean a lack of quality, so it is important to define parameters and indicators that help in its evaluation, and to identify who holds responsibility for those criteria. The article examines these aspects in light of the possibilities of cross-platform publishing and concludes with an analysis of the aspects that can be considered in assessing the quality of self-publishing.

  15. Thomas Jefferson, Page Design, and Desktop Publishing.

    Science.gov (United States)

    Hartley, James

    1991-01-01

    Discussion of page design for desktop publishing focuses on the importance of functional issues as opposed to aesthetic issues, and criticizes a previous article that stressed aesthetic issues. Topics discussed include balance, consistency in text structure, and how differences in layout affect the clarity of "The Declaration of…

  16. EEG datasets for motor imagery brain-computer interface.

    Science.gov (United States)

    Cho, Hohyun; Ahn, Minkyu; Ahn, Sangtae; Kwon, Moonyoung; Jun, Sung Chan

    2017-07-01

    Most investigators of brain-computer interface (BCI) research believe that BCI can be achieved through induced neuronal activity from the cortex, but not by evoked neuronal activity. Motor imagery (MI)-based BCI is one of the standard concepts of BCI, in that the user can generate induced activity by imagining motor movements. However, variations in performance over sessions and subjects are too severe to overcome easily; therefore, a basic understanding and investigation of BCI performance variation is necessary to find critical evidence of performance variation. Here we present not only EEG datasets for MI BCI from 52 subjects, but also the results of a psychological and physiological questionnaire, EMG datasets, the locations of 3D EEG electrodes, and EEGs for non-task-related states. We validated our EEG datasets by using the percentage of bad trials, event-related desynchronization/synchronization (ERD/ERS) analysis, and classification analysis. After conventional rejection of bad trials, we showed contralateral ERD and ipsilateral ERS in the somatosensory area, which are well-known patterns of MI. Finally, we showed that 73.08% of datasets (38 subjects) included reasonably discriminative information. Our EEG datasets included the information necessary to determine statistical significance; they consisted of well-discriminated datasets (38 subjects) and less-discriminative datasets. These may provide researchers with opportunities to investigate human factors related to MI BCI performance variation, and may also achieve subject-to-subject transfer by using metadata, including a questionnaire, EEG coordinates, and EEGs for non-task-related states. © The Authors 2017. Published by Oxford University Press.
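The ERD/ERS validation mentioned above is conventionally computed as the percentage change in band power during the task relative to a rest baseline, with negative values indicating desynchronization. A minimal sketch of that convention, on synthetic numbers rather than the paper's data:

```python
import numpy as np

def erd_percent(task_power, rest_power):
    """Band-power change relative to a rest baseline.
    Negative values = ERD (power drop), positive = ERS (power rise)."""
    return 100.0 * (task_power - rest_power) / rest_power

rng = np.random.default_rng(0)
# Synthetic mu-band power for 20 trials; power drops ~30% during imagery
rest = rng.uniform(9.0, 11.0, size=20)
task = rest * 0.7

print(round(float(np.mean(erd_percent(task, rest))), 1))  # -30.0
```

In the dataset described above, contralateral ERD would show up as such a negative value over the sensorimotor electrodes during imagery.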

  17. What Desktop Publishing Can Teach Professional Writing Students about Publishing.

    Science.gov (United States)

    Dobberstein, Michael

    1992-01-01

    Points out that desktop publishing is a metatechnology that allows professional writing students access to the production phase of publishing, giving students hands-on practice in preparing text for printing and in learning how that preparation affects the visual meaning of documents. (SR)

  18. Document Questionnaires and Datasets with DDI: A Hands-On Introduction with Colectica

    OpenAIRE

    Iverson, Jeremy; Smith, Dan

    2018-01-01

    This workshop offers a hands-on, practical approach to creating and documenting both surveys and datasets with DDI and Colectica. Participants will build and field a DDI-driven survey using their own questions or samples provided in the workshop. They will then ingest, annotate, and publish DDI dataset descriptions using the collected survey data.

  19. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2012-01-01

      Introduction The first part of the year presented an important test for the new Physics Performance and Dataset (PPD) group (cf. its mandate: http://cern.ch/go/8f77). The activity was focused on the validation of the new releases meant for the Monte Carlo (MC) production and the data-processing in 2012 (CMSSW 50X and 52X), and on the preparation of the 2012 operations. In view of the Chamonix meeting, the PPD and physics groups worked to understand the impact of the higher pile-up scenario on some of the flagship Higgs analyses to better quantify the impact of the high luminosity on the CMS physics potential. A task force is working on the optimisation of the reconstruction algorithms and on the code to cope with the performance requirements imposed by the higher event occupancy as foreseen for 2012. Concerning the preparation for the analysis of the new data, a new MC production has been prepared. The new samples, simulated at 8 TeV, are already being produced and the digitisation and recons...

  20. Pattern Analysis On Banking Dataset

    Directory of Open Access Journals (Sweden)

    Amritpal Singh

    2015-06-01

    Everyday refinement and development of technology has led to increased competition among technology companies and to attempts to crack and break down systems. This makes data mining a strategically and security-wise important area for many business organizations, including the banking sector. It allows the analysis of important information in the data warehouse and assists banks in looking for obscure patterns and discovering unknown relationships in the data. Banking systems need to process ample amounts of data on a daily basis, related to customer information, credit card details, limit and collateral details, transaction details, risk profiles, anti-money-laundering information, and trade finance data. Thousands of decisions based on these data are taken in a bank daily. This paper analyzes a banking dataset in the Weka environment to detect interesting patterns, with applications in customer acquisition, customer retention, management and marketing, and the management of risk and fraud detection.

  1. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The PPD activities, in the first part of 2013, have been focused mostly on the final physics validation and preparation for the data reprocessing of the full 8 TeV datasets with the latest calibrations. These samples will be the basis for the preliminary results for summer 2013 but most importantly for the final publications on the 8 TeV Run 1 data. The reprocessing involves also the reconstruction of a significant fraction of “parked data” that will allow CMS to perform a whole new set of precision analyses and searches. In this way the CMSSW release 53X is becoming the legacy release for the 8 TeV Run 1 data. The regular operation activities have included taking care of the prolonged proton-proton data taking and the run with proton-lead collisions that ended in February. The DQM and Data Certification team has deployed a continuous effort to promptly certify the quality of the data. The luminosity-weighted certification efficiency (requiring all sub-detectors to be certified as usab...

  2. E-publishing and multimodalities

    OpenAIRE

    Yngve Nordkvelle

    2008-01-01

    In the literature of e-publishing there has been a consistent call, from the advent of e-publishing until now, to explore new ways of expressing ideas through the new media. It has been claimed that the Internet opens an alley of possibilities and opportunities for publishing that will change the ways of publishing once and for all. In the area of publication of e-journals, however, the call for changes has received very modest responses. The thing is, it appears, that the conventional paper ...

  3. Desktop Publishing in the University.

    Science.gov (United States)

    Burstyn, Joan N., Ed.

    Highlighting changes in the work of people within the university, this book presents nine essays that examine the effects of desktop publishing and electronic publishing on professors and students, librarians, and those who work at university presses and in publication departments. Essays in the book are: (1) "Introduction: The Promise of Desktop…

  4. The Decision to Publish Electronically.

    Science.gov (United States)

    Craig, Gary

    1983-01-01

    Argues that decision to publish a given intellectual product "electronically" is a business decision based on customer needs, available format alternatives, current business climate, and variety of already existing factors. Publishers are most influenced by customers' acceptance of new products and their own role as intermediaries in…

  5. Publishing in Open Access Journals

    International Development Research Centre (IDRC) Digital Library (Canada)

    mbrunet

    00054.x). • An ISSN (International Standard Serial Number e.g. 1234-5678) has ... Publisher uses direct and unsolicited marketing (i.e., spamming) or advertising is obtrusive (to publish articles or serve on editorial board). • No information is ...

  6. Comics, Copyright and Academic Publishing

    Directory of Open Access Journals (Sweden)

    Ronan Deazley

    2014-05-01

    This article considers the extent to which UK-based academics can rely upon the copyright regime to reproduce extracts and excerpts from published comics and graphic novels without having to ask the copyright owner of those works for permission. In doing so, it invites readers to engage with a broader debate about the nature, demands and process of academic publishing.

  7. Electronic Publishing: Baseline Data 1993.

    Science.gov (United States)

    Brock, Laurie

    1993-01-01

    Provides highlights of a report describing research conducted to analyze and compare publishers' and developers' current and planned involvement in electronic publishing. Topics include acceptance of new media, licensing issues, costs and other perceived obstacles, and CD-ROMs platforms. (EAM)

  8. The Evolution of Electronic Publishing.

    Science.gov (United States)

    Lancaster, F. W.

    1995-01-01

    Discusses the evolution of electronic publishing from the early 1960s when computers were used merely to produce conventional printed products to the present move toward networked scholarly publishing. Highlights include library development, periodicals on the Internet, online journals versus paper journals, problems, and the future of…

  9. The handbook of journal publishing

    CERN Document Server

    Morris, Sally; LaFrenier, Douglas; Reich, Margaret

    2013-01-01

    The Handbook of Journal Publishing is a comprehensive reference work written by experienced professionals, covering all aspects of journal publishing, both online and in print. Journals are crucial to scholarly communication, but changes in recent years in the way journals are produced, financed, and used make this an especially turbulent and challenging time for journal publishers - and for authors, readers, and librarians. The Handbook offers a thorough guide to the journal publishing process, from editing and production through marketing, sales, and fulfilment, with chapters on management, finances, metrics, copyright, and ethical issues. It provides a wealth of practical tools, including checklists, sample documents, worked examples, alternative scenarios, and extensive lists of resources, which readers can use in their day-to-day work. Between them, the authors have been involved in every aspect of journal publishing over several decades and bring to the text their experience working for a wide range of ...

  10. An Effective Grouping Method for Privacy-Preserving Bike Sharing Data Publishing

    Directory of Open Access Journals (Sweden)

    A S M Touhidul Hasan

    2017-10-01

    Bike sharing programs are eco-friendly transportation systems that are widespread in smart city environments. In this paper, we study the problem of privacy-preserving publishing of bike sharing microdata. Bike sharing systems collect visiting information along with user identity and make it public after removing the user identity. Even after excluding user identification, the published bike sharing dataset is not protected against privacy disclosure risks: an adversary may link published datasets based on a bike's visiting information to breach a user's privacy. In this paper, we propose a grouping-based anonymization method to protect the published bike sharing dataset from linking attacks. The proposed grouping method ensures that the published bike sharing microdata are protected from disclosure risks. Experimental results show that our approach protects user privacy in the released datasets while preserving more data utility than existing methods.
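The abstract does not give the algorithm's details, but a grouping-based defence against linking attacks can be illustrated with a generic k-anonymity-style sketch. The trip records, quasi-identifier, and threshold below are invented for illustration and are not the paper's method:

```python
from collections import defaultdict

def k_anonymous_groups(records, quasi_id, k):
    """Group records by a generalized quasi-identifier and suppress groups
    smaller than k, so every published record is indistinguishable from at
    least k-1 others on that quasi-identifier."""
    groups = defaultdict(list)
    for rec in records:
        groups[quasi_id(rec)].append(rec)
    return [rec for g in groups.values() if len(g) >= k for rec in g]

# Hypothetical bike-trip microdata (station visited, hour of day)
trips = [
    {"station": "A", "hour": 8}, {"station": "A", "hour": 8},
    {"station": "A", "hour": 9}, {"station": "B", "hour": 8},
    {"station": "B", "hour": 8}, {"station": "C", "hour": 20},
]

# Generalize the exact hour into a coarse time-of-day bucket
qid = lambda r: (r["station"], "morning" if r["hour"] < 12 else "evening")
published = k_anonymous_groups(trips, qid, k=2)
print(len(published))  # 5: the lone station-C evening trip is suppressed
```

Coarser generalization keeps more records (utility) at the cost of precision, which is exactly the trade-off the paper's experiments evaluate.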

  11. Developments in Publishing: The Potential of Digital Publishing

    OpenAIRE

    X. Tian

    2007-01-01

    This research aims to identify issues associated with the impact of digital technology on the publishing industry with a specific focus on aspects of the sustainability of existing business models in Australia. Based on the case studies, interviews and Australian-wide online surveys, the research presents a review of the traditional business models in book publishing for investigating their effectiveness in a digital environment. It speculates on how and what should be considered for construc...

  12. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  13. How libraries use publisher metadata

    Directory of Open Access Journals (Sweden)

    Steve Shadle

    2013-11-01

    With the proliferation of electronic publishing, libraries are increasingly relying on publisher-supplied metadata to meet user needs for discovery in library systems. However, many publisher/content provider staff creating metadata are unaware of the end-user environment and how libraries use their metadata. This article provides an overview of the three primary discovery systems that are used by academic libraries, with examples illustrating how publisher-supplied metadata directly feeds into these systems and is used to support end-user discovery and access. Commonly seen metadata problems are discussed, with recommendations suggested. Based on a series of presentations given in Autumn 2012 to the staff of a large publisher, this article uses the University of Washington Libraries systems and services as illustrative examples. Judging by the feedback received from these presentations, publishers (specifically, staff not familiar with the big picture of metadata standards work) would benefit from a better understanding of the systems and services libraries provide using the data that is created and managed by publishers.

  14. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  15. Veterans Affairs Suicide Prevention Synthetic Dataset

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  16. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  17. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  18. From protocol to published report

    DEFF Research Database (Denmark)

    Berendt, Louise; Callréus, Torbjörn; Petersen, Lene Grejs

    2016-01-01

    and published reports of academic clinical drug trials. METHODS: A comparison was made between study protocols and their corresponding published reports. We assessed the overall consistency, which was defined as the absence of discrepancy regarding study type (categorized as either exploratory or confirmatory) … in 1999, 2001, and 2003, 95 of which fulfilled the eligibility criteria and had at least one corresponding published report reporting data on trial subjects. Overall consistency was observed in 39% of the trials (95% CI: 29 to 49%). Randomized controlled trials (RCTs) constituted 72% (95% CI: 63 to 81%) of the sample, and 87% (95% CI: 80 to 94%) of the trials were hospital based. CONCLUSIONS: Overall consistency between protocols and their corresponding published reports was low. Motivators for the inconsistencies are unknown but do not seem restricted to economic incentives.

  19. Desktop publishing with Scribus

    OpenAIRE

    Silva, Fabrício Riff; Uchôa, Kátia Cilene Amaral

    2015-01-01

    This article presents a brief tutorial on desktop publishing, with emphasis on the free software Scribus, through the creation of a practical example that explores some of its main features.

  20. Publisher Correction: On our bookshelf

    Science.gov (United States)

    Karouzos, Marios

    2018-03-01

    In the version of this Books and Arts originally published, the book title Spectroscopy for Amateur Astronomy was incorrect; it should have read Spectroscopy for Amateur Astronomers. This has now been corrected.

  1. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos Legran, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets

  2. Free Publishing Culture. Sustainable Models?

    Directory of Open Access Journals (Sweden)

    Silvia Nanclares Escudero

    2013-03-01

    As a result of collective research on the possibilities for publishing production and distribution offered today by the free-culture scenario, we present here a mapping of symptoms in order to propose a provisional diagnosis of the question: Is it possible to generate an economically sustainable publishing model based on the uses and customs generated and provided by Free Culture? Data, intuitions, experiences and ideas attempt to back up our affirmative answer.

  3. THE TYPES OF PUBLISHING SLOGANS

    Directory of Open Access Journals (Sweden)

    Ryzhov Konstantin Germanovich

    2015-03-01

    The author of the article focuses his attention on publishing slogans posted on the official websites of 100 present-day Russian publishing houses, which have not yet been studied in the specialized literature. The author has developed his own classification of publishing slogans based on the results of analysis and on current scientific views on the classification of slogans. The examined items are classified as autonomous or text-dependent, according to their interrelationship with an advertising text; as marketable, corporate or mixed, according to the presentation subject; as rational, emotional or complex, depending on the method of influence upon a recipient; as slogan-presentation, slogan-assurance, slogan-identifier, slogan-appraisal or slogan-appeal, depending on the communicative strategy; as consisting of one sentence or of two or more sentences; and as Russian or foreign. The analysis of slogans of all these kinds in the collected material allowed the author to determine the dominant features of the Russian publishing slogan: it is a sentence autonomous in relation to the advertising text, yet it presents the publishing output, influences the recipient emotionally, actualizes the communicative strategy of presenting the publishing house's distinguishing features, gives assurance to the target audience and distinguishes the advertised subject from competitors.

  4. The Community Publishing Project: assisting writers to self-publish ...

    African Journals Online (AJOL)

    This article examines the need for a small project such as the Community Publishing Project in South Africa and explores its aims. The method of involving writers and community groups in the publication process is described and two completed projects are evaluated. Lessons learnt by the Centre for the Book in managing ...

  5. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    With the realisation of the Internet of Things (IoT) paradigm, the analysis of Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets, one for classification and one for anomaly detection. The datasets were generated using OpenSHS (Open Smart Home Simulator), a simulation tool for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs in different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files of simulated ADLs make up the classification dataset, while the other forty-two files address anomaly detection problems, in which anomalous patterns were simulated and injected into the anomaly detection dataset.

  6. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical images (50) of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models to generate the synthetic images are included. The aim of the datasets are to

  7. The NASA Subsonic Jet Particle Image Velocimetry (PIV) Dataset

    Science.gov (United States)

    Bridges, James; Wernet, Mark P.

    2011-01-01

    Many tasks in fluids engineering require prediction of turbulence of jet flows. The present report documents the single-point statistics of velocity (mean and variance) of cold and hot jet flows. The jet velocities ranged from 0.5 to 1.4 times the ambient speed of sound, and temperatures ranged from unheated to a static temperature ratio of 2.7. Further, the report assesses the accuracy of the data, establishing uncertainties for the data. This paper covers the following five tasks: (1) Document acquisition and processing procedures used to create the particle image velocimetry (PIV) datasets. (2) Compare PIV data with hotwire and laser Doppler velocimetry (LDV) data published in the open literature. (3) Compare different datasets acquired at the same flow conditions in multiple tests to establish uncertainties. (4) Create a consensus dataset for a range of hot jet flows, including uncertainty bands. (5) Analyze this consensus dataset for self-consistency and compare jet characteristics to those of the open literature. The final objective was fulfilled by using the potential core length and the spread rate of the half-velocity radius to collapse the mean and turbulent velocity fields over the first 20 jet diameters.

  8. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements swarm into radios, it is necessary to establish an audio advertising dataset which could be used to analyze and classify the advertisement. A method of how to establish a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement's sample is given in *.wav file format, and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The classifying rationality of the advertisements in this dataset is proved by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for correlative audio advertisement experimental studies.
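
    The PCA-based clustering check described above can be sketched as follows; a minimal illustration with NumPy, where the feature vectors and two advertisement classes are synthetic stand-ins (the paper's actual audio features are not specified here):

    ```python
    import numpy as np

    # Toy stand-in for audio-advertisement feature vectors: two hypothetical
    # advertisement classes, 20 samples each, 10 features per sample.
    rng = np.random.default_rng(0)
    class_a = rng.normal(loc=0.0, scale=1.0, size=(20, 10))
    class_b = rng.normal(loc=5.0, scale=1.0, size=(20, 10))
    X = np.vstack([class_a, class_b])

    # PCA via SVD on the mean-centered feature matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T                 # projection onto first two components
    explained = (S ** 2) / (S ** 2).sum()  # fraction of variance per component

    # With well-separated classes, the first component dominates and the two
    # classes fall on opposite sides of it (component sign is arbitrary).
    separated = scores[:20, 0].mean() * scores[20:, 0].mean() < 0
    ```

    When the classes really differ, the projected samples cluster by class in the first few components, which is the rationality check the abstract describes.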

  9. Web publishing today and tomorrow

    CERN Document Server

    Lie, Hakon W

    1999-01-01

    The three lectures will give participants the grand tour of the Web as we know it today, as well as peeks into the past and the future. Many three-letter acronyms will be expanded, and an overview will be provided to see how the various specifications work together. Web publishing is the common theme throughout the lectures and in the second lecture, special emphasis will be given to data formats for publishing, including HTML, XML, MathML and SMIL. In the last lectures, automatic document manipulation and presentation will be discussed, including CSS, DOM and XTL.

  10. Steller sea lion sightings or recaptures of previously marked animals throughout their range, 1987-2014

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains information regarding the sighting and capture of previously marked Steller sea lions from 1987 to the present. Marks are seen and documented...

  11. A Course in Desktop Publishing.

    Science.gov (United States)

    Somerick, Nancy M.

    1992-01-01

    Describes "Promotional Publications," a required course for public relations majors, which teaches the basics of desktop publishing. Outlines how the course covers the preparation of publications used as communication tools in public relations, advertising, and organizations, with an emphasis upon design, layout, and technology. (MM)

  12. Improving Published Descriptions of Germplasm.

    Science.gov (United States)

    Published descriptions of new germplasm, such as in the Journal of Plant Registrations (JPR) and, prior to mid-2007, in Crop Science, are important vehicles for allowing researchers and other interested parties to learn about such germplasm and the methods used to generate them. Launched in 2007, JP...

  13. Publishing in Open Access Journals

    International Development Research Centre (IDRC) Digital Library (Canada)

    mbrunet

    While most open access journals are peer‐reviewed and high quality, there are a number of ... Publisher has a negative reputation (e.g., documented examples in Chronicle of Higher Education, ... A key part of Canada's aid program, IDRC supports research in developing countries to promote growth and development.

  14. FTP: Full-Text Publishing?

    Science.gov (United States)

    Jul, Erik

    1992-01-01

    Describes the use of file transfer protocol (FTP) on the INTERNET computer network and considers its use as an electronic publishing system. The differing electronic formats of text files are discussed; the preparation and access of documents are described; and problems are addressed, including a lack of consistency. (LRW)

  15. Library Networks and Electronic Publishing.

    Science.gov (United States)

    Olvey, Lee D.

    1995-01-01

    Provides a description of present and proposed plans and strategies of OCLC (Online Computer Library Center) and their relationship to electronic publishing. FirstSearch (end-user access to secondary information), GUIDON (electronic journals online) and FastDoc (document delivery) are emphasized. (JKP)

  16. A re-analysis of the Lake Suigetsu terrestrial radiocarbon calibration dataset

    International Nuclear Information System (INIS)

    Staff, R.A.; Bronk Ramsey, C.; Nakagawa, T.

    2010-01-01

    Lake Suigetsu, Honshu Island, Japan provides an ideal sedimentary sequence from which to derive a wholly terrestrial radiocarbon calibration curve back to the limits of radiocarbon detection (circa 60 ka bp). The presence of well-defined, annually-deposited laminae (varves) throughout the entirety of this period provides an independent, high resolution chronometer against which radiocarbon measurements of plant macrofossils from the sediment column can be directly related. However, data from the initial Lake Suigetsu project were found to diverge significantly from alternative, marine-based calibration datasets released around the same time (e.g. ). The main source of this divergence is thought to be the result of inaccuracies in the absolute age profile of the Suigetsu project, caused by both varve counting uncertainties and gaps in the sediment column of unknown duration between successively-drilled core sections. Here, a re-analysis of the previously-published Lake Suigetsu data is conducted. The most recent developments in Bayesian statistical modelling techniques (OxCal v4.1; ) are implemented to fit the Suigetsu data to the latest radiocarbon calibration datasets and thereby estimate the duration of the inter-core section gaps in the Suigetsu data. In this way, the absolute age of the Lake Suigetsu sediment profile is more accurately defined, providing significant information for both radiocarbon calibration and palaeoenvironmental reconstruction purposes.

  17. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

    Skill, effort, and time are required to identify and visualize anatomic structures in three-dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusion 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual

  18. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  19. Statistical and population genetics issues of two Hungarian datasets from the aspect of DNA evidence interpretation.

    Science.gov (United States)

    Szabolcsi, Zoltán; Farkas, Zsuzsa; Borbély, Andrea; Bárány, Gusztáv; Varga, Dániel; Heinrich, Attila; Völgyi, Antónia; Pamjav, Horolma

    2015-11-01

    When the DNA profile from a crime-scene matches that of a suspect, the weight of DNA evidence depends on the unbiased estimation of the match probability of the profiles. For this reason, it is required to establish and expand the databases that reflect the actual allele frequencies in the relevant population. 21,473 complete DNA profiles from Databank samples were used to establish the allele frequency database to represent the population of Hungarian suspects. We used fifteen STR loci (PowerPlex ESI16) including five new ESS loci. The aim was to calculate the statistical, forensic efficiency parameters for the Databank samples and compare the newly detected data to the earlier report. The population substructure caused by relatedness may influence the frequency of profiles estimated. As our Databank profiles were considered non-random samples, possible relationships between the suspects can be assumed. Therefore, the population inbreeding effect was estimated using the FIS calculation. The overall inbreeding parameter was found to be 0.0106. Furthermore, we tested the impact of the two allele frequency datasets on 101 randomly chosen STR profiles, including full and partial profiles. The 95% confidence interval estimates for the profile frequencies (pM) resulted in a tighter range when we used the new dataset compared to the previously published ones. We found that the FIS had less effect on frequency values in the 21,473 samples than the application of minimum allele frequency. No genetic substructure was detected by STRUCTURE analysis. Due to the low level of inbreeding effect and the high number of samples, the new dataset provides unbiased and precise estimates of LR for statistical interpretation of forensic casework and allows us to use lower allele frequencies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
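
    The paper's exact computation is not reproduced here, but a random match probability under an inbreeding correction of the kind mentioned (FIS ≈ 0.0106) can be sketched with the standard NRC II / Balding-Nichols genotype formulas; the allele frequencies in the usage note are hypothetical:

    ```python
    def genotype_freq(p, q=None, theta=0.0):
        """Genotype frequency with a coancestry/inbreeding correction theta
        (NRC II-style Balding-Nichols formulas); q=None denotes a homozygote p/p."""
        d = (1 + theta) * (1 + 2 * theta)
        if q is None:
            return ((2 * theta + (1 - theta) * p) *
                    (3 * theta + (1 - theta) * p)) / d
        return 2 * (theta + (1 - theta) * p) * (theta + (1 - theta) * q) / d

    def match_probability(profile, theta=0.0):
        """Random match probability across independent STR loci.

        profile: list of (p, q) allele-frequency pairs, q=None for homozygotes.
        """
        mp = 1.0
        for p, q in profile:
            mp *= genotype_freq(p, q, theta)
        return mp
    ```

    With theta = 0 these reduce to the familiar p^2 and 2pq; a positive theta, such as the 0.0106 reported, inflates homozygote frequencies and so gives a more conservative match probability, e.g. `match_probability([(0.1, 0.2), (0.3, None)], theta=0.0106)` versus the uncorrected value.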

  20. A Tenebrionid beetle’s dataset (Coleoptera, Tenebrionidae) from Peninsula Valdés (Chubut, Argentina)

    Science.gov (United States)

    Cheli, Germán H.; Flores, Gustavo E.; Román, Nicolás Martínez; Podestá, Darío; Mazzanti, Renato; Miyashiro, Lidia

    2013-01-01

    Abstract The Natural Protected Area Peninsula Valdés, located in Northeastern Patagonia, is one of the largest conservation units of arid lands in Argentina. Although this area has been in the UNESCO World Heritage List since 1999, it has been continually exposed to sheep grazing and cattle farming for more than a century, which have had a negative impact on the local environment. Our aim is to describe the first dataset of tenebrionid beetle species living in Peninsula Valdés and their relationship to sheep grazing. The dataset contains 118 records on 11 species and 198 adult individuals collected. Beetles were collected using pitfall traps in the two major environmental units of Peninsula Valdés, taking into account grazing intensities over a three-year time frame from 2005–2007. The data quality was enhanced following the best practices suggested in the literature during the digitalization and geo-referencing processes. Moreover, identification of specimens and current accurate spelling of scientific names were reviewed. Finally, post-validation processes using DarwinTest software were applied. Specimens have been deposited at Entomological Collection of the Centro Nacional Patagónico (CENPAT-CONICET). The dataset is part of the database of this collection and has been published on the internet through GBIF Integrated Publishing Toolkit (IPT) (http://data.gbif.org/datasets/resource/14669/). Furthermore, it is the first dataset for tenebrionid beetles of arid Patagonia available in GBIF database, and it is the first one based on a previously designed and standardized sampling to assess the interaction between these beetles and grazing in the area. The main purposes of this dataset are to ensure accessibility to data associated with Tenebrionidae specimens from Peninsula Valdés (Chubut, Argentina), also to contribute to GBIF with primary data about Patagonian tenebrionids and finally, to promote the Entomological Collection of Centro Nacional Patag

  1. A Tenebrionid beetle’s dataset (Coleoptera, Tenebrionidae) from Peninsula Valdés (Chubut, Argentina)

    Directory of Open Access Journals (Sweden)

    German Cheli

    2013-12-01

    Full Text Available The Natural Protected Area Peninsula Valdés, located in Northeastern Patagonia, is one of the largest conservation units of arid lands in Argentina. Although this area has been in the UNESCO World Heritage List since 1999, it has been continually exposed to sheep grazing and cattle farming for more than a century, which have had a negative impact on the local environment. Our aim is to describe the first dataset of tenebrionid beetle species living in Peninsula Valdés and their relationship to sheep grazing. The dataset contains 118 records on 11 species and 198 adult individuals collected. Beetles were collected using pitfall traps in the two major environmental units of Peninsula Valdés, taking into account grazing intensities over a three-year time frame from 2005–2007. The data quality was enhanced following the best practices suggested in the literature during the digitalization and geo-referencing processes. Moreover, identification of specimens and current accurate spelling of scientific names were reviewed. Finally, post-validation processes using DarwinTest software were applied. Specimens have been deposited at Entomological Collection of the Centro Nacional Patagónico (CENPAT-CONICET). The dataset is part of the database of this collection and has been published on the internet through GBIF Integrated Publishing Toolkit (IPT) (http://data.gbif.org/datasets/resource/14669/). Furthermore, it is the first dataset for tenebrionid beetles of arid Patagonia available in GBIF database, and it is the first one based on a previously designed and standardized sampling to assess the interaction between these beetles and grazing in the area. The main purposes of this dataset are to ensure accessibility to data associated with Tenebrionidae specimens from Peninsula Valdés (Chubut, Argentina), also to contribute to GBIF with primary data about Patagonian tenebrionids and finally, to promote the Entomological Collection of Centro Nacional Patag

  2. Critical appraisal of published literature

    Science.gov (United States)

    Umesh, Goneppanavar; Karippacheril, John George; Magazine, Rahul

    2016-01-01

    With a large output of medical literature coming out every year, it is impossible for readers to read every article. Critical appraisal of scientific literature is an important skill to be mastered not only by academic medical professionals but also by those involved in clinical practice. Before incorporating changes into the management of their patients, a thorough evaluation of the current or published literature is an important step in clinical practice. It is necessary for assessing the published literature for its scientific validity and generalizability to the specific patient community and reader's work environment. Simple steps have been provided by Consolidated Standard for Reporting Trial statements, Scottish Intercollegiate Guidelines Network and several other resources which if implemented may help the reader to avoid reading flawed literature and prevent the incorporation of biased or untrustworthy information into our practice. PMID:27729695

  3. Critical appraisal of published literature

    Directory of Open Access Journals (Sweden)

    Goneppanavar Umesh

    2016-01-01

    Full Text Available With a large output of medical literature coming out every year, it is impossible for readers to read every article. Critical appraisal of scientific literature is an important skill to be mastered not only by academic medical professionals but also by those involved in clinical practice. Before incorporating changes into the management of their patients, a thorough evaluation of the current or published literature is an important step in clinical practice. It is necessary for assessing the published literature for its scientific validity and generalizability to the specific patient community and reader's work environment. Simple steps have been provided by Consolidated Standard for Reporting Trial statements, Scottish Intercollegiate Guidelines Network and several other resources which if implemented may help the reader to avoid reading flawed literature and prevent the incorporation of biased or untrustworthy information into our practice.

  4. Bibliography of published papers, 1977

    International Nuclear Information System (INIS)

    1978-01-01

    Papers published by RERF (a cooperative Japan-U.S. research organization) personnel, mainly in 1977 issues of journals, are listed as a bibliography giving the title, authors, etc., mostly in both Japanese and English. The approximately 50 papers cover areas such as diseases (e.g., cancer and cardiovascular disease), dosimetry, genetics, pathology, radiation effects, and summary reports. (Mori, K.)

  5. Publisher Correction: Eternal blood vessels

    Science.gov (United States)

    Hindson, Jordan

    2018-05-01

    This article was originally published with an incorrect reference for the original article. The reference has been amended. Please see the correct reference below. Qiu, Y. et al. Microvasculature-on-a-chip for the long-term study of endothelial barrier dysfunction and microvascular obstruction in disease. Nat. Biomed. Eng. https://doi.org/10.1038/s41551-018-0224-z (2018)

  6. The Industrial Engineering publishing landscape

    OpenAIRE

    Claasen, Schalk

    2012-01-01

    Looking at the Industrial Engineering publishing landscape through the window of Google Search, an interesting panorama unfolds. The view that I took is actually just a peek and therefore my description of what I saw is not meant to be comprehensive. The African landscape is empty except for the South African Journal of Industrial Engineering (SAJIE). This is an extraordinary situation if compared to the South American continent where there are Industrial Engineering journals in at least ...

  7. Where is smoking research published?

    Science.gov (United States)

    Liguori, A.; Hughes, J. R.

    1996-01-01

    OBJECTIVE: To identify journals that have a focus on human nicotine/smoking research and to investigate the coverage of smoking in "high-impact" journals. DESIGN: The MEDLINE computer database was searched for English-language articles on human studies published in 1988-1992 using "nicotine", "smoking", "smoking cessation", "tobacco", or "tobacco use disorder" as focus descriptors. This search was supplemented with a similar search of the PSYCLIT computer database. Fifty-eight journals containing at least 20 nicotine/smoking articles over the five years were analysed for impact factor (IF; citations per article). RESULTS: Among the journals with the highest percentage of nicotine- or smoking-focused articles (that is, 9-39% of their articles were on nicotine/smoking), Addiction, American Journal of Public Health, Cancer Causes and Control, Health Psychology, and Preventive Medicine had the greatest IF (range = 1.3-2.6). Among the journals highest in impact factor (IF > 3), only American Journal of Epidemiology, American Review of Respiratory Disease, Journal of the National Cancer Institute, and Journal of the American Medical Association published more than 10 nicotine/smoking articles per year (3-5% of all articles). Of these, only Journal of the American Medical Association published a large number of nicotine/smoking articles (32 per year). CONCLUSIONS: Although smoking causes 20% of all mortality in developed countries, the topic is not adequately covered in high-impact journals. Most smoking research is published in low-impact journals. 
PMID:8795857

  8. BASE MAP DATASET, LOS ANGELES COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  9. BASE MAP DATASET, CHEROKEE COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  10. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  11. Harvard Aging Brain Study : Dataset and accessibility

    NARCIS (Netherlands)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G.; Chatwal, Jasmeer P.; Papp, Kathryn V.; Amariglio, Rebecca E.; Blacker, Deborah; Rentz, Dorene M.; Johnson, Keith A.; Sperling, Reisa A.; Schultz, Aaron P.

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging.

  12. BASE MAP DATASET, HONOLULU COUNTY, HAWAII, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  13. BASE MAP DATASET, EDGEFIELD COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  14. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    Full Text Available A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  15. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  16. Environmental Dataset Gateway (EDG) REST Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  17. BASE MAP DATASET, INYO COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  18. BASE MAP DATASET, JACKSON COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  19. BASE MAP DATASET, SANTA CRUZ COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  20. Climate Prediction Center IR 4km Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CPC IR 4km dataset was created from all available individual geostationary satellite data which have been merged to form nearly seamless global (60N-60S) IR...

  1. BASE MAP DATASET, MAYES COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications: cadastral, geodetic control,...

  2. BASE MAP DATASET, KINGFISHER COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  3. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevalier-Polarski-Linder (CPL) parametrization w(a) = w0 + w1(1−a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa of these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3, compared to the highest FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical with the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets however changes when we consider the consistency with an expansion history corresponding to evolving dark energy (w0, w1) = (−1.4, 2) crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample
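
    For concreteness, the CPL parametrization underlying this ranking can be written out directly; a minimal sketch (the FoM contour computation itself is not reproduced here):

    ```python
    def w_cpl(a, w0, w1):
        """CPL dark-energy equation of state: w(a) = w0 + w1 * (1 - a)."""
        return w0 + w1 * (1.0 - a)

    def w_of_z(z, w0, w1):
        """Same parametrization in redshift, using scale factor a = 1 / (1 + z)."""
        return w_cpl(1.0 / (1.0 + z), w0, w1)

    # The evolving dark-energy example from the abstract, (w0, w1) = (-1.4, 2),
    # crosses the phantom divide w = -1 where w1 * (1 - a) = 0.4, i.e. at a = 0.8.
    crossing = w_cpl(0.8, -1.4, 2.0)
    ```

    At a = 1 (today) the equation of state is simply w0, so the quoted example starts in the phantom regime (w0 = −1.4 < −1) and crosses to w > −1 at higher redshift.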

  4. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
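
    The core computation, correlating one yearly index with the trailing-decade average of another, can be sketched as follows; the series below are synthetic, not the paper's data:

    ```python
    import numpy as np

    def trailing_mean(x, window):
        """Mean of the previous `window` values, excluding the current one."""
        x = np.asarray(x, dtype=float)
        return np.array([x[i - window:i].mean() for i in range(window, len(x))])

    # Toy yearly series: an 'economic misery' index and a 'literary misery'
    # index constructed to track its previous-decade average, plus noise.
    rng = np.random.default_rng(1)
    misery = rng.normal(size=60)
    avg = trailing_mean(misery, 10)             # previous-decade moving average
    literary = avg + rng.normal(scale=0.1, size=avg.size)
    r = np.corrcoef(literary, avg)[0, 1]        # Pearson correlation
    ```

    Scanning the window length and recording where r peaks would mirror the paper's finding of a best fit near 11 years.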

  5. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  6. Open access to scientific publishing

    Directory of Open Access Journals (Sweden)

    Janne Beate Reitan

    2016-12-01

    Full Text Available Interest in open access (OA) to scientific publications is steadily increasing, both in Norway and internationally. From the outset, FORMakademisk has been published as a digital journal, and it was one of the first to offer OA in Norway. Since the beginning we have used Open Journal Systems (OJS) as our publishing software. OJS is part of the Public Knowledge Project (PKP), which was created by the Canadian John Willinsky and colleagues at the Faculty of Education at the University of British Columbia in 1998. The first version of OJS was released as open-source software in 2001. The programme is free for everyone to use and is part of a larger collective movement wherein knowledge is shared. When FORMakademisk started in 2008, we received much help from the journal Acta Didactica (n.d.) at the University of Oslo, which had started the year before us. They had also translated the programme into Norwegian. From the start, we were able to publish in both Norwegian and English. Other journals have used FORMakademisk as a model and source of inspiration when starting up or when converting from subscription-based print journals to electronic OA, including the Journal of Norwegian Media Researchers [Norsk medietidsskrift]. It is in this way that the movement around PKP works and continues to grow to provide free access to research. As the articles are OA, they are also easily accessible to non-scientists. We also emphasise that the language should be readily accessible, although it should maintain a high scientific quality; often these may be two sides of the same coin. We on the editorial team are now looking forward to adopting the newly developed OJS 3 this spring, with many new features and an improved design for users, including authors, peer reviewers, editors and readers.

  7. Electronic publishing of SPE papers

    International Nuclear Information System (INIS)

    Perdue, J.M.

    1992-01-01

    This paper reports that the SPE is creating an electronic index to over 25,000 technical papers and will produce a CD-ROM as an initial product. This SPE CD-ROM Masterdisc will be available at the SPE Annual Meeting in Washington, D.C. on October 4-7, 1992. The SPE Board has appointed an Ad Hoc Committee on Electronic Publishing to coordinate and oversee this project and to recommend authoring standards for submitting SPE papers electronically in the future.

  8. Open Access Publishing with Drupal

    Directory of Open Access Journals (Sweden)

    Nina McHale

    2011-10-01

    Full Text Available In January 2009, the Colorado Association of Libraries (CAL) suspended publication of its print quarterly journal, Colorado Libraries, as a cost-saving measure in a time of fiscal uncertainty. Printing and mailing the journal to its 1300 members cost CAL more than $26,000 per year. Publication of the journal was placed on an indefinite hiatus until the editorial staff proposed an online, open access format a year later. The benefits of migrating to open access included: significantly lower costs; a green platform; instant availability of content; a greater level of access for users with disabilities; and a higher level of visibility for the journal and the association. The editorial staff chose Drupal, including the E-journal module, and while Drupal is notorious for its steep learning curve—which exacerbated delays to content that had been created before the publishing hiatus—the fourth electronic issue was published recently at coloradolibrariesjournal.org. This article will discuss both the benefits and challenges of transitioning to an open access model and the choice of Drupal as a platform over other, more established journal software options.

  9. E-publishing and multimodalities

    Directory of Open Access Journals (Sweden)

    Yngve Nordkvelle

    2008-12-01

    Full Text Available In the literature of e-publishing there has been a consistent call, from the advent of e-publishing until now, to explore new ways of expressing ideas through the new media. It has been claimed that the Internet opens an alley of possibilities and opportunities for publishing that will change the ways of publishing once and for all. In the area of e-journal publication, however, the call for change has received a very modest response. The thing is, it appears, that the conventional paper journal has a solid grip on the accepted formats of publishing. In a published research paper, Mayernik (2007) explains some of the reasons for this. Although pioneers of e-publishing suggested various areas where academic publishing could be expanded on, the opportunities given are scarcely used. Mayernik outlines "Non-linearity", "Multimedia", "Multiple use", "Interactivity" and "Rapid Publication" as areas of expansion for the academic e-journal (2007). The paper deserves a thorough reading in itself, and I will briefly quote from his conclusion: "It is likely that the traditional linear article will continue to be the prevalent format for scholarly journals, both print and electronic, for the foreseeable future, and while electronic features will garner more and more use as technology improves, they will continue to be used to supplement, and not supplant, the traditional article." This is a challenging situation. If we accept the present dominant style of presenting scientific literature, we would best use our energy in seeking ways of improving the efficiency of that communication style. The use of multimedia, non-linearity etc. would perfect the present state, but still keep the scientific article as the main template. It is very unlikely that scientific publication will substitute the scholarly article with unproven alternatives.
What we face is a rather conservative style of remediation that blurs the impact of the new media, or "transparency" if

  10. Publishing corruption discussion: predatory journalism.

    Science.gov (United States)

    Jones, James W; McCullough, Laurence B

    2014-02-01

    Dr Spock is a brilliant young vascular surgeon who is up for tenure next year. He has been warned by the chair of surgery that he needs to increase his list of publications to assure passage. He has recently had a paper reviewed by one of the top journals in his specialty, Journal X-special, with several suggestions for revision. He has received an e-mail request for manuscript submission from a newly minted, open access Journal of Vascular Disease Therapy, which promises a quick and likely favorable response for a fee. What should be done? A. Send the paper to another peer-reviewed journal with the suggested revisions. B. Resubmit the paper to Journal X-special. C. Submit to the online journal as is to save time. D. Submit to the online journal and another regular journal. E. Look for another job. Copyright © 2014 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  11. The IAEA as a publisher

    International Nuclear Information System (INIS)

    1965-01-01

    One of the largest publishing enterprises in Vienna has developed within the Agency, incidental to its function of disseminating scientific information. The Agency recently completed its sixth year of scientific publication of literature dealing with the peaceful uses of atomic energy. Quite early in the history of the IAEA, this work grew to considerable dimensions. In 1959 the programme consisted of two volumes in the Proceedings series, one in the Safety series, and four Technical Directories, making a total in that year of 18 000 books, in addition to those prepared for free distribution. In the following year, as Agency meetings and other activities developed, the list was much longer, consisting of six volumes in the Proceedings series, two in the Safety series, two in the Technical Directory series, eight in the Review series, two in the Bibliographical series, three panel reports, one volume in the Legal series and the first issue of 'Nuclear Fusion'. The total number of volumes sold was 24 000, in addition to the large number for free distribution. Thereafter, there was some difficulty in keeping up with the expanding demands, and some arrears of contract printing began to accumulate. It was therefore decided to introduce internal printing of Agency publications. The adoption of the 'cold type' method in 1962 led to considerable savings and faster production. During 1963, printing and binding equipment was installed which rendered the Agency independent of contractual services. Current policy is to print and bind internally all IAEA publications except the journal 'Nuclear Fusion'. Average annual production now consists of about twenty volumes of the proceedings of scientific meetings, six technical directories (the Directory of Nuclear Reactors has been published in its fifth edition), several bibliographies and numerous technical reports.

  12. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow-survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the Common Dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011: Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Using Triton's Perspective processing software, the multibeam datasets collected for the Shallow Survey were processed for detailed analysis. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  13. GTI: a novel algorithm for identifying outlier gene expression profiles from integrated microarray datasets.

    Directory of Open Access Journals (Sweden)

    John Patrick Mpindi

    Full Text Available BACKGROUND: Meta-analysis of gene expression microarray datasets presents significant challenges for statistical analysis. We developed and validated a new bioinformatic method for the identification of genes upregulated in subsets of samples of a given tumour type ('outlier genes'), a hallmark of potential oncogenes. METHODOLOGY: A new statistical method (the gene tissue index, GTI) was developed by modifying and adapting algorithms originally developed for statistical problems in economics. We compared the potential of the GTI to detect outlier genes in meta-datasets with four previously defined statistical methods, COPA, the OS statistic, the t-test and ORT, using simulated data. We demonstrated that the GTI performed as well as the existing methods in a single-study simulation. Next, we evaluated the performance of the GTI in the analysis of combined Affymetrix gene expression data from several published studies covering 392 normal samples of tissue from the central nervous system, 74 astrocytomas, and 353 glioblastomas. According to the results, the GTI was better able than most of the previous methods to identify known oncogenic outlier genes. In addition, the GTI identified 29 novel outlier genes in glioblastomas, including TYMS and CDKN2A. The over-expression of these genes was validated in vivo by immunohistochemical staining data from clinical glioblastoma samples. Immunohistochemical data were available for 65% (19 of 29) of these genes, and 17 of these 19 genes (90%) showed a typical outlier staining pattern. Furthermore, raltitrexed, a specific inhibitor of TYMS used in the therapy of tumour types other than glioblastoma, also effectively blocked cell proliferation in glioblastoma cell lines, thus highlighting this outlier gene candidate as a potential therapeutic target. CONCLUSIONS/SIGNIFICANCE: Taken together, these results support the GTI as a novel approach to identify potential oncogene outliers and drug targets.
The algorithm is
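
    The abstract does not give the GTI's exact formula, but COPA, one of the baseline methods it is compared against, illustrates the general idea of an outlier statistic: median-centre each gene's expression values, scale by the median absolute deviation, and report a high quantile of the standardized values. The sketch below is that COPA-style stand-in, with toy expression values; it is not the GTI itself.

```python
import statistics

def copa_score(values, q=0.95):
    """COPA-style outlier score: median-centre the expression values,
    scale by the median absolute deviation (MAD), and report the q-th
    quantile of the standardized values. Genes overexpressed in only a
    small subset of samples score high, unlike with a plain t-test."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return 0.0
    standardized = sorted((v - med) / mad for v in values)
    return standardized[int(q * (len(standardized) - 1))]

# Toy expression values, purely illustrative:
bulk = [0.9, 1.0, 1.1] * 6          # 18 "normal" samples
outliers = [8.0, 9.0]               # 2 samples with outlier overexpression
print(copa_score(bulk + outliers))  # approximately 70: the outliers dominate
```

    Because the median and MAD are insensitive to a couple of extreme values, the two outlier samples stand out sharply after standardization, which a mean-based statistic would dilute.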

  14. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long-standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve the 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable, elastic NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines, including the biological, mechanical, and materials sciences, to determine the surface attributes of microscopic objects. However, SEM micrographs still remain 2D images. To effectively measure and visualize surface properties, we need to truly restore the 3D shape model from the 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  16. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years have brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally, the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating the performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. In this chapter, we discuss some of the sampling techniques used for balancing datasets, and the performance measures more appropriate for mining imbalanced datasets.
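
    The two points above, that accuracy misleads on imbalanced data and that sampling can rebalance the classes, can be sketched concretely. The random-oversampling function below is one of the simplest techniques in the family the chapter surveys (SMOTE-style synthetic interpolation is another); the data are toy labels, not from the chapter.

```python
import random

def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for the minority (positive) class --
    usually more informative than accuracy on imbalanced data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def random_oversample(X, y, positive=1, seed=0):
    """Naive random oversampling: duplicate minority-class samples
    at random until the two classes are balanced."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == positive]
    majority = [(x, t) for x, t in zip(X, y) if t != positive]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    combined = majority + minority + extra
    return [x for x, _ in combined], [t for _, t in combined]

# A classifier predicting "all negative" scores 90% accuracy but zero recall:
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
print(precision_recall(y_true, y_pred))   # (0.0, 0.0)

X_bal, y_bal = random_oversample(list(range(100)), y_true)
print(sum(y_bal), len(y_bal) - sum(y_bal))  # 90 90
```

    The zero precision and recall expose what the 90% accuracy hides; after oversampling, both classes contribute equally to training.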

  17. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for the higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated. AT/GC content analysis of the DNA sequences was carried out. The QR code is helpful for quick identification of isolates. AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes, studied under restriction digestion and helpful for performing studies using short DNA sequences, was reported. The dataset disclosed here is new revelatory data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends
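
    The GC-content computation behind this kind of analysis is straightforward: count G and C bases as a percentage of sequence length, since GC-rich duplexes are generally more thermally stable. The fragment below is a hypothetical example sequence, not one of the patent sequences.

```python
def gc_content(seq):
    """Percent G+C in a DNA sequence. Higher GC content generally
    indicates greater thermal stability of the double helix."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

# Hypothetical fragment for illustration only:
print(round(gc_content("ATGCGCGTAT"), 1))  # 50.0
```

    AT content is simply the complement (100 minus the GC percentage) for sequences containing only the four standard bases.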

  18. Harvard Aging Brain Study: Dataset and accessibility.

    Science.gov (United States)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G; Chatwal, Jasmeer P; Papp, Kathryn V; Amariglio, Rebecca E; Blacker, Deborah; Rentz, Dorene M; Johnson, Keith A; Sperling, Reisa A; Schultz, Aaron P

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging. To promote more extensive analyses, imaging data was designed to be compatible with other publicly available datasets. A cloud-based system enables access to interested researchers with blinded data available contingent upon completion of a data usage agreement and administrative approval. Data collection is ongoing and currently in its fifth year. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. ESTABLISHING A PUBLISHING OUTFIT IN NIGERIA EMENYONU ...

    African Journals Online (AJOL)

    CIU

    stakeholders in the publishing industry, the legal environment of publishing, ... retailers. Publishing is a peculiar form of business for which a special group of very .... The publisher should provide furniture and fittings for the staff and intended ...

  20. Identifying frauds and anomalies in Medicare-B dataset.

    Science.gov (United States)

    Jiwon Seo; Mendelevitch, Ofer

    2017-07-01

    The healthcare industry is growing at a rapid rate, toward a market value of $7 trillion worldwide. At the same time, fraud in healthcare is becoming a serious problem, amounting to 5% of total healthcare spending, or $100 billion each year in the US. Manually detecting healthcare fraud requires much effort. Recently, machine learning and data mining techniques have been applied to automatically detect healthcare fraud. This paper proposes a novel PageRank-based algorithm to detect healthcare frauds and anomalies. We apply the algorithm to the Medicare-B dataset, real-life data with 10 million healthcare insurance claims. The algorithm successfully identifies tens of previously unreported anomalies.
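
    The abstract does not describe the fraud-specific variant, but plain power-iteration PageRank is the generic building block such algorithms start from. The sketch below runs it on a toy graph of hypothetical providers; the graph, node names, and iteration count are assumptions for illustration.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Plain power-iteration PageRank. `graph` maps each node to the
    list of nodes it links to. Rank flows along edges each iteration;
    dangling nodes spread their rank uniformly."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Toy referral graph among hypothetical providers a, b, c:
g = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
r = pagerank(g)
print(max(r, key=r.get))  # b: it receives links from both a and c
```

    In a fraud setting, unusually concentrated rank in a claims-referral graph can flag providers for closer inspection, though the paper's actual scoring is not specified here.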

  1. Adventures in semantic publishing: exemplar semantic enhancements of a research article.

    Directory of Open Access Journals (Sweden)

    David Shotton

    2009-04-01

    Full Text Available Scientific innovation depends on finding, integrating, and re-using the products of previous research. Here we explore how recent developments in Web technology, particularly those related to the publication of data and metadata, might assist that process by providing semantic enhancements to journal articles within the mainstream process of scholarly journal publishing. We exemplify this by describing semantic enhancements we have made to a recent biomedical research article taken from PLoS Neglected Tropical Diseases, providing enrichment to its content and increased access to datasets within it. These semantic enhancements include provision of live DOIs and hyperlinks; semantic markup of textual terms, with links to relevant third-party information resources; interactive figures; a re-orderable reference list; a document summary containing a study summary, a tag cloud, and a citation analysis; and two novel types of semantic enrichment: the first, a Supporting Claims Tooltip to permit "Citations in Context", and the second, Tag Trees that bring together semantically related terms. In addition, we have published downloadable spreadsheets containing data from within tables and figures, have enriched these with provenance information, and have demonstrated various types of data fusion (mashups) with results from other research articles and with Google Maps. We have also published machine-readable RDF metadata both about the article and about the references it cites, for which we developed a Citation Typing Ontology, CiTO (http://purl.org/net/cito/). The enhanced article, which is available at http://dx.doi.org/10.1371/journal.pntd.0000228.x001, presents a compelling existence proof of the possibilities of semantic publication.
We hope the showcase of examples and ideas it contains, described in this paper, will excite the imaginations of researchers and publishers, stimulating them to explore the possibilities of semantic publishing for their own
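
    A CiTO citation-typing statement is, at bottom, one RDF triple: citing article, typed citation property, cited article. The sketch below emits such a statement in Turtle syntax by string formatting; the cited DOI is a placeholder, and the namespace is the one given in the abstract.

```python
def cito_triple(citing_doi, cited_doi, relation):
    """Emit one CiTO citation-typing statement in Turtle syntax.
    `relation` must be a property defined by the CiTO ontology
    (e.g. cito:supports); the DOIs here are placeholders."""
    return ("@prefix cito: <http://purl.org/net/cito/> .\n"
            f"<https://doi.org/{citing_doi}> {relation} "
            f"<https://doi.org/{cited_doi}> .")

# Placeholder cited DOI, for illustration only:
print(cito_triple("10.1371/journal.pntd.0000228", "10.1000/example", "cito:supports"))
```

    Typed citations like this are what powers the article's "Citations in Context" tooltip: a machine can distinguish a citation that supports a claim from one that merely mentions it.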

  2. Do Higher Government Wages Reduce Corruption? Evidence Based on a Novel Dataset

    OpenAIRE

    Le, Van-Ha; de Haan, Jakob; Dietzenbacher, Erik

    2013-01-01

    This paper employs a novel dataset on government wages to investigate the relationship between government remuneration policy and corruption. Our dataset, as derived from national household or labor surveys, is more reliable than the data on government wages as used in previous research. When the relationship between government wages and corruption is modeled to vary with the level of income, we find that the impact of government wages on corruption is strong at relatively low-income levels.

  3. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and for relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  4. Thesaurus Dataset of Educational Technology in Chinese

    Science.gov (United States)

    Wu, Linjing; Liu, Qingtang; Zhao, Gang; Huang, Huan; Huang, Tao

    2015-01-01

    The thesaurus dataset of educational technology is a knowledge description of educational technology in Chinese. The aims of this thesaurus were to collect the subject terms in the domain of educational technology, facilitate the standardization of terminology and promote the communication between Chinese researchers and scholars from various…

  5. A meditation on the use of hands. Previously published in Scandinavian Journal of Occupational Therapy 1995; 2: 153-166.

    Science.gov (United States)

    Kielhofner, G

    2014-01-01

    The theme of mind-body unity is fundamental to occupational therapy. Nonetheless, the field continues to embrace a dualism of mind and body. This dualism persists because the field views the body only as an object, ignoring how the body is lived. Drawing upon phenomenological discussions of bodily experience, this paper illustrates how the lived body is a locus of intelligence, intentionality, adaptiveness, and experience. It also considers the bodily ground of motivation and thought and discusses how the body constitutes and incorporates its world. Finally, the paper considers implications of the lived body for therapy.

  6. Outcome of unicompartmental knee arthroplasty in octogenarians with tricompartmental osteoarthritis: A longer followup of previously published report

    Directory of Open Access Journals (Sweden)

    Sanjiv KS Marya

    2013-01-01

    Full Text Available Background: Unicompartmental knee arthroplasty (UKA) has specific indications, producing excellent results. It, however, has a limited lifespan and needs eventual conversion to total knee arthroplasty (TKA). It is, therefore, a temporizing procedure in select active young patients with advanced unicompartmental osteoarthritis (UCOA). Being a less morbid procedure, it is suggested as an alternative in very elderly patients with tricompartmental osteoarthritis (TCOA). We performed UKA in a series of 45 octogenarians with TCOA with predominant medial compartment osteoarthritis (MCOA) and analyzed the results. Materials and Methods: Forty-five octogenarian patients with TCOA with predominant MCOA underwent UKA (19 bilateral) from January 2002 to January 2012. All had similar preoperative work-up, surgical approach, procedure, implants and postoperative protocol. Clinicoradiological assessment was done at 3-monthly intervals for the first year, then yearly till the last followup (average 72 months, range 8-128 months). Results were evaluated using the Knee Society scores (KSS), satisfaction index [using the visual analogue scale (VAS)] and orthogonal radiographs (for loosening, subsidence, lysis or implant wear). Resurgery for any cause was considered failure. Results: Four patients (six knees) died due to medical conditions, two patients (three knees) were lost to followup, and these were excluded from the final analysis. Barring two failures, all the remaining patients were pain-free and performing well at the final followup. Indications for resurgery were: medial femoral condyle fracture needing fixation and subsequent conversion to TKA at 2 years (n=1), and progression of arthritis and pain leading to revision TKA at 6 years (n=1). Conclusion: UKA has shown successful outcomes with regard to pain relief and function, with 96.4% implant survival and 94.9% good or excellent outcomes. Due to lower demands, early rehabilitation, less morbidity, and relatively short life expectancy, UKA can successfully manage TCOA in octogenarians.

  7. Electrons, Electronic Publishing, and Electronic Display.

    Science.gov (United States)

    Brownrigg, Edwin B.; Lynch, Clifford A.

    1985-01-01

    Provides a perspective on electronic publishing by distinguishing between "Newtonian" publishing and "quantum-mechanical" publishing. Highlights include media and publishing, works delivered through electronic media, electronic publishing and the printed word, management of intellectual property, and recent copyright-law issues…

  8. Sharing Video Datasets in Design Research

    DEFF Research Database (Denmark)

    Christensen, Bo; Abildgaard, Sille Julie Jøhnk

    2017-01-01

    This paper examines how design researchers, design practitioners and design education can benefit from sharing a dataset. We present the Design Thinking Research Symposium 11 (DTRS11) as an exemplary project that implied sharing video data of design processes and design activity in natural settings … with a large group of fellow academics from the international community of Design Thinking Research, for the purpose of facilitating research collaboration and communication within the field of Design and Design Thinking. This approach emphasizes the social and collaborative aspects of design research, where … a multitude of appropriate perspectives and methods may be utilized in analyzing and discussing the singular dataset. The shared data is, from this perspective, understood as a design object in itself, which facilitates new ways of working, collaborating, studying, learning and educating within the expanding …

  9. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during experimental data collection. Savu is an accessible and flexible big-data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  10. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer … anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal … interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical …
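
    The core operation the study evaluates, resampling an image to a higher resolution by interpolation, can be illustrated in one dimension. This is only a toy stand-in for the 3-D interpolation applied to DWI volumes; the signal values and upsampling factor are assumptions for the example.

```python
def upsample_linear(signal, factor):
    """Linearly interpolate a 1-D signal to `factor`-times resolution.
    New samples lie on straight lines between neighbouring originals,
    so no new intensity extremes are invented."""
    out = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        for k in range(factor):
            t = k / factor
            out.append(a + t * (b - a))
    out.append(signal[-1])
    return out

print(upsample_linear([0.0, 2.0, 4.0], 2))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

    Note that interpolation adds sample points but no new information, which is consistent with the finding above that it cannot disentangle fine detail once partial volume effects dominate the original data.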

  11. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  12. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  13. Approximate spatio-temporal top-k publish/subscribe

    KAUST Repository

    Chen, Lisi; Shang, Shuo

    2018-01-01

Location-based publish/subscribe plays a significant role in mobile information dissemination. In this light, we propose and study a novel problem of processing location-based top-k subscriptions over spatio-temporal data streams. We define a new type of approximate location-based top-k subscription, the Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription, that continuously feeds users with relevant spatio-temporal messages by considering textual similarity, spatial proximity, and information freshness. Unlike existing location-based top-k subscriptions, the ATSK Subscription can automatically adjust the triggering condition by taking the triggering score of other subscriptions into account. The group filtering efficacy can be substantially improved by sacrificing the publishing result quality with a bounded guarantee. We conduct extensive experiments on two real datasets to demonstrate the performance of the developed solutions.

  14. Approximate spatio-temporal top-k publish/subscribe

    KAUST Repository

    Chen, Lisi

    2018-04-26

Location-based publish/subscribe plays a significant role in mobile information dissemination. In this light, we propose and study a novel problem of processing location-based top-k subscriptions over spatio-temporal data streams. We define a new type of approximate location-based top-k subscription, the Approximate Temporal Spatial-Keyword Top-k (ATSK) Subscription, that continuously feeds users with relevant spatio-temporal messages by considering textual similarity, spatial proximity, and information freshness. Unlike existing location-based top-k subscriptions, the ATSK Subscription can automatically adjust the triggering condition by taking the triggering score of other subscriptions into account. The group filtering efficacy can be substantially improved by sacrificing the publishing result quality with a bounded guarantee. We conduct extensive experiments on two real datasets to demonstrate the performance of the developed solutions.
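The triggering score described in the abstract combines three signals. A minimal sketch, assuming a simple weighted-sum ranking function; the weights, the Jaccard text similarity, the inverse-distance proximity and the exponential freshness decay are all illustrative assumptions, not the paper's actual scoring function:

```python
import math

def atsk_score(msg, sub, now, alpha=0.5, beta=0.3, decay=0.01):
    """Hypothetical relevance score in the spirit of an ATSK subscription:
    a weighted sum of textual similarity, spatial proximity and freshness."""
    # Textual similarity: Jaccard overlap between keyword sets.
    text = len(msg["terms"] & sub["terms"]) / len(msg["terms"] | sub["terms"])
    # Spatial proximity: inverse-distance score in (0, 1].
    dx, dy = msg["x"] - sub["x"], msg["y"] - sub["y"]
    prox = 1.0 / (1.0 + math.hypot(dx, dy))
    # Information freshness: exponential decay with message age (seconds).
    fresh = math.exp(-decay * (now - msg["t"]))
    return alpha * text + beta * prox + (1 - alpha - beta) * fresh

msg = {"terms": {"coffee", "shop"}, "x": 0.0, "y": 0.0, "t": 100.0}
sub = {"terms": {"coffee"}, "x": 0.0, "y": 0.0}
print(atsk_score(msg, sub, now=100.0))  # 0.75: half text match, zero distance, brand new
```

An approximate subscription would then compare this score against a triggering threshold that adapts to the scores of competing subscriptions.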

  15. Why should we publish Linked Data?

    Science.gov (United States)

    Blower, Jon; Riechert, Maik; Koubarakis, Manolis; Pace, Nino

    2016-04-01

We use the Web every day to access information from all kinds of different sources. But the complexity and diversity of scientific data mean that discovering, accessing and interpreting data remain a large challenge to researchers, decision-makers and other users. Different sources of useful information on data, algorithms, instruments and publications are scattered around the Web. How can we link all these things together to help users to better understand and exploit earth science data? How can we combine scientific data with other relevant data sources, when standards for describing and sharing data vary so widely between communities? "Linked Data" is a term that describes a set of standards and "best practices" for sharing data on the Web (http://www.w3.org/standards/semanticweb/data). These principles can be summarised as follows: 1. Create unique and persistent identifiers for the important "things" in a community (e.g. datasets, publications, algorithms, instruments). 2. Allow users to "look up" these identifiers on the web to find out more information about them. 3. Make this information machine-readable in a community-neutral format (such as RDF, Resource Description Framework). 4. Within this information, embed links to other things and concepts and say how these are related. 5. Optionally, provide web service interfaces to allow the user to perform sophisticated queries over this information (using a language such as SPARQL). The promise of Linked Data is that, through these techniques, data will be more discoverable, more comprehensible and more usable by different communities, not just the community that produced the data. As a result, many data providers (particularly public-sector institutions) are now publishing data in this way. However, this area is still in its infancy in terms of real-world applications. Data users need guidance and tools to help them use Linked Data. Data providers need reassurance that the investments they are making in
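Principles 1-4 can be sketched in a few lines that mint a persistent identifier for a dataset and describe it in machine-readable RDF (Turtle). All `example.org` URIs below are placeholders; only the Dublin Core and PROV property URIs are real vocabulary terms:

```python
# Minimal Linked Data sketch: give a dataset a persistent URI (principle 1),
# describe it in RDF/Turtle (principle 3), and embed a typed link to a
# related publication (principle 4). URIs under example.org are illustrative.
DATASET = "https://example.org/id/dataset/rainfall-2015"
PAPER = "https://example.org/id/publication/some-article"

def describe_dataset(dataset_uri, title, derived_from):
    """Return a Turtle snippet describing one dataset and one provenance link."""
    return (
        f"<{dataset_uri}>\n"
        f'    <http://purl.org/dc/terms/title> "{title}" ;\n'
        f"    <http://www.w3.org/ns/prov#wasDerivedFrom> <{derived_from}> .\n"
    )

ttl = describe_dataset(DATASET, "Gridded rainfall dataset", PAPER)
print(ttl)
```

Serving such descriptions at the identifier's URL covers principle 2; a SPARQL endpoint over the accumulated triples would cover the optional principle 5.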

  16. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. A further three papers that the authors know of are under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, along with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground, space and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in the reliability and validity of these datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and the blending methods used to combine satellite and gauge based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded
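The inter-product spread at a single space-time coordinate can be quantified very simply. A minimal sketch with invented monthly rainfall values for four hypothetical gridded products (the product names and numbers are illustrative, not taken from the datasets cited above):

```python
import statistics

# Illustrative monthly rainfall estimates (mm) for one grid cell and month
# from several hypothetical gridded products, echoing the inter-product
# spread the CORDEX evaluations report between observational datasets.
estimates = {"prod_A": 112.0, "prod_B": 98.5, "prod_C": 131.2, "prod_D": 87.9}

values = list(estimates.values())
mean = statistics.fmean(values)
spread = statistics.stdev(values)  # inter-product standard deviation
rel_spread = spread / mean         # one simple relative uncertainty measure

print(f"mean={mean:.1f} mm, spread={spread:.1f} mm ({100 * rel_spread:.0f}%)")
```

Repeating this per grid cell and month yields a map of observational uncertainty against which model biases can be judged.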

  17. Solutions for research data from a publisher's perspective

    Science.gov (United States)

    Cotroneo, P.

    2015-12-01

    Sharing research data has the potential to make research more efficient and reproducible. Elsevier has developed several initiatives to address the different needs of research data users. These include PANGEA Linked data, which provides geo-referenced, citable datasets from earth and life sciences, archived as supplementary data from publications by the PANGEA data repository; Mendeley Data, which allows users to freely upload and share their data; a database linking program that creates links between articles on ScienceDirect and datasets held in external data repositories such as EarthRef and EarthChem; a pilot for searching for research data through a map interface; an open data pilot that allows authors publishing in Elsevier journals to store and share research data and make this publicly available as a supplementary file alongside their article; and data journals, including Data in Brief, which allow researchers to share their data open access. Through these initiatives, researchers are not only encouraged to share their research data, but also supported in optimizing their research data management. By making data more readily citable and visible, and hence generating citations for authors, these initiatives also aim to ensure that researchers get the recognition they deserve for publishing their data.

  18. Accounting for inertia in modal choices: some new evidence using a RP/SP dataset

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Manca, Francesco

    2011-01-01

...effect is stable along the SP experiments. Inertia has been studied more extensively with panel datasets, but few investigations have used RP/SP datasets. In this paper we extend previous work in several ways. We test and compare several ways of measuring inertia, including measures that have been proposed for both short and long RP panel datasets. We also explore new measures of inertia to test for the effect of "learning" (in the sense of acquiring experience or getting more familiar with) along the SP experiment, and we disentangle this effect from the pure inertia effect. A mixed logit model is used that allows us to account for both systematic and random taste variations in the inertia effect and for correlations among RP and SP observations. Finally we explore the relation between the utility specification (especially in the SP dataset) and the role of inertia in explaining current choices.

  19. Desktop Publishing Choices: Making an Appropriate Decision.

    Science.gov (United States)

    Crawford, Walt

    1991-01-01

    Discusses various choices available for desktop publishing systems. Four categories of software are described, including advanced word processing, graphics software, low-end desktop publishing, and mainstream desktop publishing; appropriate hardware is considered; and selection guidelines are offered, including current and future publishing needs,…

  20. MUCH Electronic Publishing Environment: Principles and Practices.

    Science.gov (United States)

    Min, Zheng; Rada, Roy

    1994-01-01

    Discusses the electronic publishing system called Many Using and Creating Hypermedia (MUCH). The MUCH system supports collaborative authoring; reuse; formatting and printing; management; hypermedia publishing and delivery; and interchange. This article examines electronic publishing environments; the MUCH environment; publishing activities; and…

  1. Self-Published Books: An Empirical "Snapshot"

    Science.gov (United States)

    Bradley, Jana; Fulton, Bruce; Helm, Marlene

    2012-01-01

    The number of books published by authors using fee-based publication services, such as Lulu and AuthorHouse, is overtaking the number of books published by mainstream publishers, according to Bowker's 2009 annual data. Little empirical research exists on self-published books. This article presents the results of an investigation of a random sample…

  2. Development of a SPARK Training Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olson, Jarrod R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-01

In its first five years, the National Nuclear Security Administration’s (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge so that it exists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK’s intended analysis capability. The analysis demonstration sought to answer the

  3. Development of a SPARK Training Dataset

    International Nuclear Information System (INIS)

    Sayre, Amanda M.; Olson, Jarrod R.

    2015-01-01

In its first five years, the National Nuclear Security Administration's (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge so that it exists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK's intended analysis capability. The analysis demonstration sought to answer

  4. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439) or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.
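For the fixed fronto-parallel camera configurations mentioned above, ground-truth depth converts to horizontal disparity by the standard pinhole relation d = f·B/Z. A minimal sketch; the focal length and baseline values are illustrative, and real fixating eyes verge, which this special case ignores:

```python
def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Horizontal disparity (pixels) for an idealised parallel stereo pair:
    d = f * B / Z, with focal length f in pixels, baseline B and depth Z
    in metres. Valid only for the fronto-parallel, non-verging case."""
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return focal_px * baseline_m / depth_m

# A point 2 m away, 800 px focal length, 6.5 cm interocular baseline:
print(depth_to_disparity(2.0, 800.0, 0.065))  # 26.0 px
```

This is the kind of per-pixel ground truth a ray-tracing back-end can supply everywhere in the scene, including partially occluded regions where stereo-matching techniques fail.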

  5. ISC-EHB: Reconstruction of a robust earthquake dataset

    Science.gov (United States)

    Weston, J.; Engdahl, E. R.; Harris, J.; Di Giacomo, D.; Storchak, D. A.

    2018-04-01

The EHB Bulletin of hypocentres and associated travel-time residuals was originally developed with procedures described by Engdahl, Van der Hilst and Buland (1998) and currently ends in 2008. It is a widely used seismological dataset, which is now expanded and reconstructed, partly by exploiting updated procedures at the International Seismological Centre (ISC), to produce the ISC-EHB. The reconstruction begins in the modern period (2000-2013), to which new and more rigorous procedures for event selection, data preparation, processing, and relocation are applied. The selection criteria minimise the location bias produced by unmodelled 3D Earth structure, resulting in events that are relatively well located in any given region. Depths of the selected events are significantly improved by a more comprehensive review of near-station and secondary phase travel-time residuals based on ISC data, especially for the depth phases pP, pwP and sP, as well as by a rigorous review of the event depths in subduction zone cross sections. The resulting cross sections and associated maps are shown to depict seismicity in subduction zones in much greater detail than previously achievable. The new ISC-EHB dataset will be especially useful for global seismicity studies and high-frequency regional and global tomographic inversions.

  6. Integrated remotely sensed datasets for disaster management

    Science.gov (United States)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and Europe's SECAM and PAL, which are then recorded using various video formats. This technology has recently been employed as a front-line remote sensing technology for damage assessment post-disaster. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at National University of Ireland, Maynooth, (NUIM) is described. New improvements are proposed, including low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and post-disaster.

  7. Publishing Platform for Aerial Orthophoto Maps, the Complete Stack

    Science.gov (United States)

    Čepický, J.; Čapek, L.

    2016-06-01

When creating sets of orthophoto maps from mosaic compositions using airborne systems, such as popular drones, we need to publish the results of the work to users. Several steps need to be performed in order to get large-scale raster data published. As a first step, the data have to be shared as a service (OGC WMS as a view service, OGC WCS as a download service). For some applications, OGC WMTS is handy as well, for faster viewing of the data. Finally, the data have to become part of a web mapping application, so that they can be used and evaluated by non-technical users. In this talk, we would like to present an automated pipeline of these steps, where the user puts in an orthophoto image and, as a result, OGC Open Web Services are published together with a web mapping application containing the data. The web mapping application can be used as a standard presentation platform for this type of big raster data for the generic user. The publishing platform - the Geosense online map information system - can also be used to combine data from various resources, to create unique map compositions, and as input for better interpretation of the photographed phenomena. The whole process has been successfully tested with an eBee drone at raster data resolutions of 1.5-4 cm/px over many areas, and the result is also used for the creation of derived datasets, usually suited for property management - the records of roads, pavements, traffic signs, public lighting, sewage systems, grave locations, and others.
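The first step, sharing the data as an OGC WMS view service, boils down to answering standard GetMap requests. A sketch of how a client would build such a request against a published orthophoto layer; the endpoint, layer name and bounding box are placeholders:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(1024, 768)):
    """Build an OGC WMS 1.3.0 GetMap request URL for a published raster layer.
    bbox is (minx, miny, maxx, maxy) in the stated CRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": "EPSG:3857",
        "BBOX": ",".join(f"{c:.2f}" for c in bbox),
        "WIDTH": str(size[0]),
        "HEIGHT": str(size[1]),
        "FORMAT": "image/png",
    }
    return f"{base}?{urlencode(params)}"

url = wms_getmap_url("https://example.org/geoserver/wms", "drone:orthophoto",
                     (1822000.0, 6140000.0, 1823000.0, 6141000.0))
print(url)
```

A WMTS endpoint serves the same imagery as pre-rendered tiles, which is why it is faster for interactive viewing in the final web mapping application.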

  8. Structure and navigation for electronic publishing

    Science.gov (United States)

    Tillinghast, John; Beretta, Giordano B.

    1998-01-01

The sudden explosion of the World Wide Web as a new publication medium has given a dramatic boost to the electronic publishing industry, which was previously a limited market centered on CD-ROMs and on-line databases. While the phenomenon has parallels to the advent of the tabloid press in the middle of the last century, the electronic nature of the medium brings with it the typical characteristic of 4th-wave media, namely the acceleration in its propagation speed and the volume of information. Consequently, e-publications are even flatter than print media; Shakespeare's Romeo and Juliet shares the same computer screen with a home-made plagiarized copy of Deep Throat. The most touted tool for locating useful information on the World Wide Web is the search engine. However, due to the medium's flatness, the sought information is drowned in a sea of useless information. A better solution is to build tools that allow authors to structure information so that it can easily be navigated. We experimented with the use of ontologies as a tool to formulate structures for information about a specific topic, so that related concepts are placed in adjacent locations and can easily be navigated using simple and ergonomic user models. We describe our effort in building a World Wide Web based photo album that is shared among a small network of people.

  9. Treatment Planning Constraints to Avoid Xerostomia in Head-and-Neck Radiotherapy: An Independent Test of QUANTEC Criteria Using a Prospectively Collected Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Moiseenko, Vitali, E-mail: vmoiseenko@bccancer.bc.ca [Department of Medical Physics, Vancouver Cancer Centre, British Columbia Cancer Agency, Vancouver, BC (Canada); Wu, Jonn [Department of Radiation Oncology, Vancouver Cancer Centre, British Columbia Cancer Agency, Vancouver, BC (Canada); Hovan, Allan [Department of Oral Oncology, Vancouver Cancer Centre, British Columbia Cancer Agency, Vancouver, BC (Canada); Saleh, Ziad; Apte, Aditya; Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Harrow, Stephen [Department of Radiation Oncology, Vancouver Cancer Centre, British Columbia Cancer Agency, Vancouver, BC (Canada); Rabuka, Carman; Muggli, Adam [Department of Oral Oncology, Vancouver Cancer Centre, British Columbia Cancer Agency, Vancouver, BC (Canada); Thompson, Anna [Department of Radiation Oncology, Vancouver Cancer Centre, British Columbia Cancer Agency, Vancouver, BC (Canada)

    2012-03-01

Purpose: The severe reduction of salivary function (xerostomia) is a common complication after radiation therapy for head-and-neck cancer. Consequently, guidelines to ensure adequate function based on parotid gland tolerance dose-volume parameters have been suggested by the QUANTEC group and by Ortholan et al. We performed a validation test of these guidelines against a prospectively collected dataset and compared the results with a previously published dataset. Methods and Materials: Whole-mouth stimulated salivary flow data from 66 head-and-neck cancer patients treated with radiotherapy at the British Columbia Cancer Agency (BCCA) were measured, and treatment planning data were abstracted. Flow measurements were collected from 50 patients at 3 months, and 60 patients at 12-month follow-up. Previously published data from a second institution, Washington University in St. Louis (WUSTL), were used for comparison. A logistic model was used to describe the incidence of Grade 4 xerostomia as a function of the mean dose of the spared parotid gland. The rate of correctly predicting the lack of xerostomia (negative predictive value [NPV]) was computed for both the QUANTEC constraints and the Ortholan et al. recommendation to constrain the total volume of both glands receiving more than 40 Gy to less than 33%. Results: Both datasets showed a rate of xerostomia of less than 20% when the mean dose to the least-irradiated parotid gland is kept to less than 20 Gy. Logistic model parameters for the incidence of xerostomia at 12 months after therapy, based on the least-irradiated gland, were D50 = 32.4 Gy and γ = 0.97. NPVs for the QUANTEC guideline were 94% (BCCA data) and 90% (WUSTL data). For the Ortholan et al. guideline, NPVs were 85% (BCCA) and 86% (WUSTL). Conclusion: These data confirm that the QUANTEC guideline effectively avoids xerostomia, and it is somewhat more effective than constraints on the volume receiving more than 40 Gy.

  10. Treatment Planning Constraints to Avoid Xerostomia in Head-and-Neck Radiotherapy: An Independent Test of QUANTEC Criteria Using a Prospectively Collected Dataset

    International Nuclear Information System (INIS)

    Moiseenko, Vitali; Wu, Jonn; Hovan, Allan; Saleh, Ziad; Apte, Aditya; Deasy, Joseph O.; Harrow, Stephen; Rabuka, Carman; Muggli, Adam; Thompson, Anna

    2012-01-01

Purpose: The severe reduction of salivary function (xerostomia) is a common complication after radiation therapy for head-and-neck cancer. Consequently, guidelines to ensure adequate function based on parotid gland tolerance dose–volume parameters have been suggested by the QUANTEC group and by Ortholan et al. We performed a validation test of these guidelines against a prospectively collected dataset and compared the results with a previously published dataset. Methods and Materials: Whole-mouth stimulated salivary flow data from 66 head-and-neck cancer patients treated with radiotherapy at the British Columbia Cancer Agency (BCCA) were measured, and treatment planning data were abstracted. Flow measurements were collected from 50 patients at 3 months, and 60 patients at 12-month follow-up. Previously published data from a second institution, Washington University in St. Louis (WUSTL), were used for comparison. A logistic model was used to describe the incidence of Grade 4 xerostomia as a function of the mean dose of the spared parotid gland. The rate of correctly predicting the lack of xerostomia (negative predictive value [NPV]) was computed for both the QUANTEC constraints and the Ortholan et al. recommendation to constrain the total volume of both glands receiving more than 40 Gy to less than 33%. Results: Both datasets showed a rate of xerostomia of less than 20% when the mean dose to the least-irradiated parotid gland is kept to less than 20 Gy. Logistic model parameters for the incidence of xerostomia at 12 months after therapy, based on the least-irradiated gland, were D 50 = 32.4 Gy and γ = 0.97. NPVs for the QUANTEC guideline were 94% (BCCA data) and 90% (WUSTL data). For the Ortholan et al. guideline, NPVs were 85% (BCCA) and 86% (WUSTL). Conclusion: These data confirm that the QUANTEC guideline effectively avoids xerostomia, and it is somewhat more effective than constraints on the volume receiving more than 40 Gy.
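The fitted logistic model can be sketched from the reported parameters (D50 = 32.4 Gy, γ = 0.97). The abstracts do not state the exact functional form, so the common parameterisation NTCP = 1/(1 + (D50/D)^(4γ)) used below is an assumption, not necessarily the paper's equation:

```python
def xerostomia_risk(mean_dose_gy, d50=32.4, gamma50=0.97):
    """Logistic dose-response sketch for Grade 4 xerostomia at 12 months as a
    function of mean dose (Gy) to the spared (least-irradiated) parotid gland.
    D50 and gamma50 come from the abstract; the functional form
    1 / (1 + (D50/D)**(4*gamma50)) is an assumed common parameterisation."""
    if mean_dose_gy <= 0:
        return 0.0
    return 1.0 / (1.0 + (d50 / mean_dose_gy) ** (4.0 * gamma50))

print(round(xerostomia_risk(20.0), 3))  # below the ~20% rate reported at 20 Gy
print(xerostomia_risk(32.4))            # 0.5 by construction at D = D50
```

Under this form, keeping the spared gland's mean dose below 20 Gy keeps the predicted risk well under 20%, consistent with the QUANTEC finding the abstract validates.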

  11. Types of Open Access Publishers in Scopus

    Directory of Open Access Journals (Sweden)

    David Solomon

    2013-05-01

This study assessed the characteristics of publishers who published open access (OA) journals indexed in Scopus in 2010. Publishers were categorized into six types: professional, society, university, scholar/researcher, government, and other organizations. Type of publisher was broken down by number of journals/articles published in 2010, funding model, location, discipline and whether the journal was born OA or converted to OA. Universities and societies accounted for 50% of the journals and 43% of the articles published. Professional publishers accounted for a third of the journals and 42% of the articles. With the exception of professional and scholar/researcher publishers, most journals were originally subscription journals that made at least their digital version freely available. Arts, humanities and social science journals are largely published by societies and universities outside the major publishing countries. Professional OA publishing is most common in biomedicine, mathematics, the sciences and engineering. Approximately a quarter of the journals are hosted on national/international platforms; in Latin America, Eastern Europe and Asia these are largely published by universities and societies without the need for publishing fees. This type of collaboration between governments, universities and/or societies may be an effective means of expanding open access publication.

  12. Comprehensive comparison of large-scale tissue expression datasets

    DEFF Research Database (Denmark)

    Santos Delgado, Alberto; Tsafou, Kalliopi; Stolte, Christian

    2015-01-01

    For tissues to carry out their functions, they rely on the right proteins to be present. Several high-throughput technologies have been used to map out which proteins are expressed in which tissues; however, the data have not previously been systematically compared and integrated. We present a comprehensive evaluation of tissue expression data from a variety of experimental techniques and show that these agree surprisingly well with each other and with results from literature curation and text mining. We further found that most datasets support the assumed but not demonstrated distinction between … (http://tissues.jensenlab.org), which makes all the scored and integrated data available through a single user-friendly web interface.

  13. What was hidden in the Publisher's Archive

    DEFF Research Database (Denmark)

    Mai, Anne-Marie

    2015-01-01

    On the Danish Author Elsa Gress and her correspondence on American Literature with the Publisher, K. E. Hermann, Arena.

  14. Desktop Publishing: Changing Technology, Changing Occupations.

    Science.gov (United States)

    Stanton, Michael

    1991-01-01

    Describes desktop publishing (DTP) and its place in corporations. Lists job titles of those working in desktop publishing and describes DTP as it is taught at secondary and postsecondary levels and by private trainers. (JOW)

  15. Making the Leap to Desktop Publishing.

    Science.gov (United States)

    Schleifer, Neal

    1986-01-01

    Describes one teacher's approach to desktop publishing. Explains how the Macintosh and LaserWriter were used in the publication of a school newspaper. Guidelines are offered to teachers for the establishment of a desktop publishing lab. (ML)

  16. Promises and Realities of Desktop Publishing.

    Science.gov (United States)

    Thompson, Patricia A.; Craig, Robert L.

    1991-01-01

    Examines the underlying assumptions of the rhetoric of desktop publishing promoters. Suggests four criteria to help educators provide insights into issues and challenges concerning desktop publishing technology that design students will face on the job. (MG)

  17. Electronic Journal Publishing: Observations from Inside.

    Science.gov (United States)

    Hunter, Karen

    1998-01-01

    Focuses on electronic scholarly-journal publishing. Discusses characteristics of current academic electronic publishing; effects of the World Wide Web; user needs and positions of academic libraries; costs; and decisions of research librarians that drive the industry. (AEF)

  18. Dataset of anomalies and malicious acts in a cyber-physical subsystem.

    Science.gov (United States)

    Laso, Pedro Merino; Brosset, David; Puentes, John

    2017-10-01

    This article presents a dataset produced to investigate how data and information quality estimations make it possible to detect anomalies and malicious acts in cyber-physical systems. Data were acquired using a cyber-physical subsystem consisting of liquid containers for fuel or water, along with its automated control and data acquisition infrastructure. The described data consist of temporal series representing five operational scenarios (normal, anomalies, breakdown, sabotage, and cyber-attacks) corresponding to 15 different real situations. The dataset is publicly available in the .zip file published with the article, to investigate and compare faulty operation detection and characterization methods for cyber-physical systems.

  19. Basics of Desktop Publishing. Second Edition.

    Science.gov (United States)

    Beeby, Ellen; Crummett, Jerrie

    This document contains teacher and student materials for a basic course in desktop publishing. Six units of instruction cover the following: (1) introduction to desktop publishing; (2) desktop publishing systems; (3) software; (4) type selection; (5) document design; and (6) layout. The teacher edition contains some or all of the following…

  20. The Changing Business of Scholarly Publishing.

    Science.gov (United States)

    Hunter, Karen

    1993-01-01

    Discussion of changes and trends in scholarly publishing highlights monographs; journals; user-centered publishing; electronic products and services, including adding value, marketing strategies, and new pricing systems; changing attitudes regarding copyright; trends in publishing industry reorganization; and impacts on research libraries. (LRW)

  1. Strontium removal jar test dataset for all figures and tables.

    Data.gov (United States)

    U.S. Environmental Protection Agency — The datasets were used to demonstrate strontium removal under various water quality and treatment conditions. This dataset is associated with the...

  2. Chemical elements in the environment: multi-element geochemical datasets from continental to national scale surveys on four continents

    Science.gov (United States)

    Caritat, Patrice de; Reimann, Clemens; Smith, David; Wang, Xueqiu

    2017-01-01

    During the last 10-20 years, Geological Surveys around the world have undertaken a major effort towards delivering fully harmonized and tightly quality-controlled low-density multi-element soil geochemical maps and datasets of vast regions including up to whole continents. Concentrations of between 45 and 60 elements commonly have been determined in a variety of different regolith types (e.g., sediment, soil). The multi-element datasets are published as complete geochemical atlases and made available to the general public. Several other geochemical datasets covering smaller areas but generally at a higher spatial density are also available. These datasets may, however, not be found by superficial internet-based searches because the elements are not mentioned individually either in the title or in the keyword lists of the original references. This publication attempts to increase the visibility and discoverability of these fundamental background datasets covering large areas up to whole continents.

  3. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    Science.gov (United States)

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that differ statistically in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby variables with F-ratio values below the threshold can be ignored as not class-distinguishing, which gives the analyst confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while all but one of the nineteen benchmarked false-positive metabolites previously identified were consistently excluded. Copyright © 2016 Elsevier B.V. All rights reserved.
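The tile-based GC×GC-TOFMS software itself is not reproduced here, but the two core ideas in the abstract — a per-variable Fisher ratio between two classes, and a permutation-based null distribution used to set a significance threshold — can be sketched in a simplified, hypothetical form:

```python
import numpy as np

def f_ratio(a, b):
    """Fisher ratio for one variable: between-class sum of squares
    over pooled within-class sum of squares, for two classes a and b."""
    grand = np.mean(np.concatenate([a, b]))
    between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    within = a.var(ddof=1) * (len(a) - 1) + b.var(ddof=1) * (len(b) - 1)
    return between / within

def null_threshold(a, b, n_perm=1000, q=0.95, seed=0):
    """Estimate an F-ratio threshold from a permutation null:
    class labels are shuffled so no real class difference remains."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    nulls = []
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        nulls.append(f_ratio(perm[:len(a)], perm[len(a):]))
    return float(np.quantile(nulls, q))

# Synthetic stand-ins for one metabolite's signal in the two classes
rng = np.random.default_rng(1)
repressed = rng.normal(10.0, 1.0, 12)    # e.g. glucose-grown replicates
derepressed = rng.normal(13.0, 1.0, 12)  # e.g. ethanol-grown replicates
f = f_ratio(repressed, derepressed)
thr = null_threshold(repressed, derepressed)
# A variable is class-distinguishing if its F-ratio exceeds the threshold
```

The permutation step is what the abstract calls the null distribution analysis: variables whose F-ratio falls below the empirical quantile can be ignored as not class-distinguishing.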

  4. Use of Maple Seedling Canopy Reflectance Dataset for Validation of SART/LEAFMOD Radiative Transfer Model

    Science.gov (United States)

    Bond, Barbara J.; Peterson, David L.

    1999-01-01

    This project was a collaborative effort by researchers at ARC, OSU and the University of Arizona. The goal was to use a dataset obtained from a previous study to "empirically validate a new canopy radiative-transfer model (SART) which incorporates a recently-developed leaf-level model (LEAFMOD)". The document includes a short research summary.

  5. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    INSPIRE-00005122; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities, and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure.

  6. Predicting dataset popularity for the CMS experiment

    International Nuclear Information System (INIS)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities, and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure. (paper)

  7. Internationally coordinated glacier monitoring: strategy and datasets

    Science.gov (United States)

    Hoelzle, Martin; Armstrong, Richard; Fetterer, Florence; Gärtner-Roer, Isabelle; Haeberli, Wilfried; Kääb, Andreas; Kargel, Jeff; Nussbaumer, Samuel; Paul, Frank; Raup, Bruce; Zemp, Michael

    2014-05-01

    (c) the Randolph Glacier Inventory (RGI), a new and globally complete digital dataset of outlines from about 180,000 glaciers with some meta-information, which has been used for many applications relating to the IPCC AR5 report. Concerning glacier changes, a database (Fluctuations of Glaciers) exists containing information about mass balance, front variations including past reconstructed time series, geodetic changes, and special events. Annual mass balance reporting contains information for about 125 glaciers, with a subset of 37 glaciers with continuous observational series since 1980 or earlier. Front variation observations of around 1800 glaciers are available from most of the mountain ranges world-wide. This database was recently updated with 26 glaciers having an unprecedented dataset of length changes from reconstructions of well-dated historical evidence going back as far as the 16th century. Geodetic observations of about 430 glaciers are available. The database is completed by a dataset containing information on special events, including glacier surges, glacier lake outbursts, ice avalanches, and eruptions of ice-clad volcanoes, related to about 200 glaciers. A special database of glacier photographs contains 13,000 pictures from around 500 glaciers, some of them dating back to the 19th century. A key challenge is to combine and extend the traditional observations with fast-evolving datasets from new technologies.

  8. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as a benchmark) as well as tedious preparatory work to generate the sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition, and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  9. 2006 Fynmeet sea clutter measurement trial: Datasets

    CSIR Research Space (South Africa)

    Herselman, PLR

    2007-09-06

    Full Text Available Figures show RCS [dBm2] versus time and absolute range for f1 = 9.000 GHz for datasets CAD14-001 and CAD14-002 (plot data not reproducible as text).

  10. Publisher Correction: Resistance to nonribosomal peptide antibiotics mediated by D-stereospecific peptidases.

    Science.gov (United States)

    Li, Yong-Xin; Zhong, Zheng; Hou, Peng; Zhang, Wei-Peng; Qian, Pei-Yuan

    2018-03-07

    In the version of this article originally published, the links and files for the Supplementary Information, including Supplementary Tables 1-5, Supplementary Figures 1-25, Supplementary Note, Supplementary Datasets 1-4 and the Life Sciences Reporting Summary, were missing in the HTML. The error has been corrected in the HTML version of this article.

  11. Peer-review: An IOP Publishing Perspective

    Science.gov (United States)

    Smith, Timothy

    2015-03-01

    Online publishing is challenging, and potentially changing, the role of publishers in both managing the peer-review process and disseminating the work that they publish in meeting contrasting needs from diverse groups of research communities. Recognizing the value of peer-review as a fundamental service to authors and the research community, the underlying principles of managing the process for journals published by IOP Publishing remain unchanged and yet the potential and demand for alternative models exists. This talk will discuss the traditional approach to peer-review placed in the context of this changing demand.

  12. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    Science.gov (United States)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national-scale flood risk analyses, using high-resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large-scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms, and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that, for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local-scale features. Global flood frameworks are enabling flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution, a resolution increase over previous countrywide datasets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
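The integration step described above — combining modelled water depths with a gridded population layer — reduces, in its simplest form, to masking one co-registered grid by another and summing. A toy sketch with synthetic grids (not the Facebook or global-model data themselves):

```python
import numpy as np

def exposed_population(depth_m, population, threshold_m=0.0):
    """Sum the population in cells where modelled flood depth exceeds
    a threshold. Both inputs are co-registered 2-D grids."""
    assert depth_m.shape == population.shape
    return float(population[depth_m > threshold_m].sum())

# Synthetic 4x4 grids standing in for the hazard and exposure layers
depth = np.array([[0.0, 0.2, 1.5, 0.0],
                  [0.0, 0.0, 0.8, 2.1],
                  [0.0, 0.0, 0.0, 0.3],
                  [0.0, 0.0, 0.0, 0.0]])
pop = np.full((4, 4), 25.0)  # 25 people per cell

at_risk = exposed_population(depth, pop, threshold_m=0.1)  # → 125.0
```

The resolution mismatch discussed in the abstract matters precisely here: if the population grid is much coarser than the hazard grid, the cell-by-cell masking above over- or under-counts people near flood boundaries.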

  13. Evolving hard problems: Generating human genetics datasets with a complex etiology

    Directory of Open Access Journals (Sweden)

    Himmelstein Daniel S

    2011-07-01

    Full Text Available Abstract Background A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components rather than single-component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously, these methods have been evaluated with datasets simulated according to pre-defined genetic models. Results Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight hundred Pareto fronts, one for each independent run of our algorithm. In each run the predictiveness of single genetic variants and pairs of genetic variants has been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. Conclusions This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.
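One simplified, hypothetical way to see what "predictiveness of a k-order combination" means is an MDR-style evaluation: group samples by their genotype tuple over k variants and score the best genotype-to-class mapping. In the toy example below, a pure XOR-like interaction shows no marginal effect for either variant alone but perfect pairwise predictiveness — exactly the kind of relationship the evolution strategy is built to maximize while minimizing lower-order effects:

```python
from collections import Counter, defaultdict

def combination_predictiveness(genotypes, labels, cols):
    """Accuracy of the best genotype->class mapping for a given
    combination of variant columns (MDR-style evaluation)."""
    groups = defaultdict(Counter)
    for row, y in zip(genotypes, labels):
        groups[tuple(row[c] for c in cols)][y] += 1
    correct = sum(max(counts.values()) for counts in groups.values())
    return correct / len(labels)

# Toy data: disease status is the XOR of two biallelic variants, so
# the pair is fully predictive while each variant alone is not.
genos = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
labels = [a ^ b for a, b in genos]

single = max(combination_predictiveness(genos, labels, [c]) for c in (0, 1))
pair = combination_predictiveness(genos, labels, [0, 1])
# single → 0.5 (no marginal effect), pair → 1.0 (pure interaction)
```

Minimizing `single` while maximizing `pair` (and higher orders) over many runs is, in spirit, what each Pareto front in the paper trades off.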

  14. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and the Western Wind Integration Data Set. It supports the next generation of wind integration studies.

  15. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data.

  16. Technical note: An inorganic water chemistry dataset (1972–2011 ...

    African Journals Online (AJOL)

    A national dataset of inorganic chemical data of surface waters (rivers, lakes, and dams) in South Africa is presented and made freely available. The dataset comprises more than 500 000 complete water analyses from 1972 up to 2011, collected from more than 2 000 sample monitoring stations in South Africa. The dataset ...

  17. QSAR ligand dataset for modelling mutagenicity, genotoxicity, and rodent carcinogenicity

    Directory of Open Access Journals (Sweden)

    Davy Guan

    2018-04-01

    Full Text Available Five datasets were constructed from ligand and bioassay result data from the literature. These datasets include bioassay results from the Ames mutagenicity assay, the GreenScreen GADD45a-GFP assay, the Syrian Hamster Embryo (SHE) assay, and 2-year rat carcinogenicity assays. These datasets provide information about chemical mutagenicity, genotoxicity, and carcinogenicity.

  18. Utilizing the Antarctic Master Directory to find orphan datasets

    Science.gov (United States)

    Bonczkowski, J.; Carbotte, S. M.; Arko, R. A.; Grebas, S. K.

    2011-12-01

    While most Antarctic data are housed at an established disciplinary-specific data repository, there are data types for which no suitable repository exists. In some cases, these "orphan" data, without an appropriate national archive, are served from local servers by the principal investigators who produced the data. There are many pitfalls with data served privately, including the frequent lack of adequate documentation to ensure the data can be understood by others for re-use, and the impermanence of personal web sites. For example, if an investigator leaves an institution and the data moves, the published link is no longer accessible. To ensure continued availability of data, submission to long-term national data repositories is needed. As stated in the National Science Foundation Office of Polar Programs (NSF/OPP) Guidelines and Award Conditions for Scientific Data, investigators are obligated to submit their data for curation and long-term preservation; this includes the registration of a dataset description into the Antarctic Master Directory (AMD), http://gcmd.nasa.gov/Data/portals/amd/. The AMD is a Web-based, searchable directory of thousands of dataset descriptions, known as DIF records, submitted by scientists from over 20 countries. It serves as a node of the International Directory Network/Global Change Master Directory (IDN/GCMD). The US Antarctic Program Data Coordination Center (USAP-DCC), http://www.usap-data.org/, funded through NSF/OPP, was established in 2007 to help streamline the process of data submission and DIF record creation. When data do not quite fit within any existing disciplinary repository, they can be registered within the USAP-DCC as the fallback data repository. Within the scope of the USAP-DCC we undertook the challenge of discovering and "rescuing" orphan datasets currently registered within the AMD. In order to find which DIF records led to data served privately, all records relating to US data within the AMD were parsed.
After

  19. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests, and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count, sodium, potassium, and creatinine levels, prothrombin time, and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to the time of but before admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001, comparing the frequency of clinically important abnormalities in patients with normal previous results with that in patients with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.
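The reported interval for the 13/3096 abnormal repeats is consistent with a Wilson score confidence interval for a binomial proportion (the abstract does not state which CI method was used); a sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 13 of 3096 previously normal tests were abnormal at admission
lo, hi = wilson_ci(13, 3096)
# lo, hi ≈ 0.0025, 0.0072 — matching the reported 0.2% to 0.7%
```

The Wilson interval is preferred over the simple normal approximation here because the proportion (0.4%) is very close to zero.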

  20. Structural Revision of Some Recently Published Iridoid Glucosides

    DEFF Research Database (Denmark)

    Jensen, Søren Rosendal; Calis, Ihsan; Gotfredsen, Charlotte Held

    2007-01-01

    ). Finally, two alleged iridoid galactosides from Buddleja crispa named buddlejosides A and B (12a and 12b) have been shown to be the corresponding glucosides; the former is identical to agnuside (13a) while the latter is 3,4-dihydroxybenzoylaucubin (13b), an iridoid glucoside not previously published...

  1. Augmenting Data with Published Results in Bayesian Linear Regression

    Science.gov (United States)

    de Leeuw, Christiaan; Klugkist, Irene

    2012-01-01

    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

  2. Poet's Market, 1997: Where & How To Publish Your Poetry.

    Science.gov (United States)

    Martin, Christine, Ed.; Bentley, Chantelle, Ed.

    This directory provides 1700 listings and evaluations of poetry publishers--300 more than in the previous edition--along with complete submission and contact information. Listings include both domestic and international markets, from mass circulation and literary magazines to small presses and university quarterlies, and contain complete profiles…

  3. fCCAC: functional canonical correlation analysis to evaluate covariance between nucleic acid sequencing datasets.

    Science.gov (United States)

    Madrigal, Pedro

    2017-03-01

    Computational evaluation of variability across DNA or RNA sequencing datasets is a crucial step in genomic science, as it allows both to evaluate reproducibility of biological or technical replicates, and to compare different datasets to identify their potential correlations. Here we present fCCAC, an application of functional canonical correlation analysis to assess covariance of nucleic acid sequencing datasets such as chromatin immunoprecipitation followed by deep sequencing (ChIP-seq). We show how this method differs from other measures of correlation, and exemplify how it can reveal shared covariance between histone modifications and DNA binding proteins, such as the relationship between the H3K4me3 chromatin mark and its epigenetic writers and readers. An R/Bioconductor package is available at http://bioconductor.org/packages/fCCAC/ . pmb59@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
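fCCAC applies a functional extension of canonical correlation analysis; the classical multivariate CCA it builds on can be sketched via the QR/SVD route, where the singular values of Qx^T Qy are the canonical correlations. This is a simplified illustration with synthetic data, not the package's functional implementation:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Classical CCA: column-center both matrices, take thin QR
    factorizations, and read the canonical correlations off the
    singular values of Qx^T Qy."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 1))          # latent signal shared by both
X = np.hstack([shared + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])  # stand-in for one dataset
Y = np.hstack([shared + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])  # stand-in for a second dataset
corrs = canonical_correlations(X, Y)
# First canonical correlation is near 1 (shared covariance), rest near 0
```

In the functional setting of fCCAC the columns are replaced by smoothed coverage curves, but the interpretation is the same: large leading canonical correlations indicate shared covariance between the two sequencing datasets.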

  4. Valuation of large variable annuity portfolios: Monte Carlo simulation and synthetic datasets

    Directory of Open Access Journals (Sweden)

    Gan Guojun

    2017-12-01

    Full Text Available Metamodeling techniques have recently been proposed to address the computational issues related to the valuation of large portfolios of variable annuity contracts. However, it is extremely difficult, if not impossible, for researchers to obtain real datasets from insurance companies in order to test their metamodeling techniques on such real datasets and publish the results in academic journals. To facilitate the development and dissemination of research related to the efficient valuation of large variable annuity portfolios, this paper creates a large synthetic portfolio of variable annuity contracts based on the properties of real portfolios of variable annuities and implements a simple Monte Carlo simulation engine for valuing the synthetic portfolio. In addition, this paper presents fair market values and Greeks for the synthetic portfolio of variable annuity contracts, which are important quantities for managing the financial risks associated with variable annuities. The resulting datasets can be used by researchers to test and compare the performance of various metamodeling techniques.
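The paper's Monte Carlo engine is not reproduced here, but the flavor of valuing a variable annuity guarantee by simulation can be sketched with a hypothetical single-contract guaranteed minimum maturity benefit (GMMB) under geometric Brownian motion; the actual engine and contract features in the paper are richer:

```python
import numpy as np

def gmmb_value(s0, guarantee, r, sigma, t_years, n_paths=100_000, seed=42):
    """Monte Carlo value of a guaranteed minimum maturity benefit:
    the insurer pays max(G - S_T, 0) at maturity, with the account
    value S following risk-neutral geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t_years
                     + sigma * np.sqrt(t_years) * z)
    payoff = np.maximum(guarantee - st, 0.0)
    return float(np.exp(-r * t_years) * payoff.mean())

v = gmmb_value(s0=100.0, guarantee=100.0, r=0.03, sigma=0.2, t_years=10.0)
```

Under these parameters the payoff is a 10-year European put on the account value, so the simulated value can be cross-checked against the Black-Scholes put price (about 10.9); repeating such valuations over thousands of contracts is exactly the computational burden that metamodeling aims to reduce.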

  5. Automatic electromagnetic valve for previous vacuum

    International Nuclear Information System (INIS)

    Granados, C. E.; Martin, F.

    1959-01-01

    A valve is described which maintains an installation's vacuum when the electric current fails. It also admits air into the backing (previous) vacuum pump to prevent the oil from rising into the vacuum tubes. (Author)

  6. Provenance Challenges for Earth Science Dataset Publication

    Science.gov (United States)

    Tilmes, Curt

    2011-01-01

    Modern science is increasingly dependent on computational analysis of very large data sets. Organizing, referencing, and publishing those data have become a complex problem. Published research that depends on such data often fails to cite the data in sufficient detail to allow an independent scientist to reproduce the original experiments and analyses. This paper explores some of the challenges related to data identification, equivalence, and reproducibility in the domain of data-intensive scientific processing. It will use the example of Earth Science satellite data, but the challenges also apply to other domains.

  7. Data Publishing and Sharing Via the THREDDS Data Repository

    Science.gov (United States)

    Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.

    2007-12-01

    The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied together by some intellectual challenge, but often located in different organizations and locations. A critical component of these endeavors is the publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fashion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client-initiated, on-demand, location-transparent storage for data of any type that can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to: * securely store and "own" data files and associated metadata * upload files via HTTP and GridFTP * upload a collection of data as a single file * modify and restructure repository contents * incorporate metadata provided by the user * generate additional metadata programmatically * edit individual metadata elements. The TDR can exist separately from a TDS, serving content via HTTP. Also, it can work in conjunction with the TDS, which includes functionality to provide: * access to data in a variety of formats via -- OPeNDAP -- OGC Web Coverage Service (for gridded datasets) -- bulk HTTP file transfer * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats * serving of very large volume datasets, such as NEXRAD radar * aggregation into virtual datasets * subsetting via OPeNDAP and NetCDF subsetting services. This talk will discuss TDR

  8. Ethical issues in publishing in predatory journals.

    Science.gov (United States)

    Ferris, Lorraine E; Winker, Margaret A

    2017-06-15

    Predatory journals, or journals that charge an article processing charge (APC) to authors, yet do not have the hallmarks of legitimate scholarly journals such as peer review and editing, Editorial Boards, editorial offices, and other editorial standards, pose a number of new ethical issues in journal publishing. This paper discusses ethical issues around predatory journals and publishing in them. These issues include misrepresentation; lack of editorial and publishing standards and practices; academic deception; research and funding wasted; lack of archived content; and undermining confidence in research literature. It is important that the scholarly community, including authors, institutions, editors, and publishers, support the legitimate scholarly research enterprise, and avoid supporting predatory journals by not publishing in them, serving as their editors or on the Editorial Boards, or permitting faculty to knowingly publish in them without consequences.

  9. QlikView Server and Publisher

    CERN Document Server

    Redmond, Stephen

    2014-01-01

    This is a comprehensive guide with a step-by-step approach that enables you to host and manage servers using QlikView Server and QlikView Publisher. If you are a server administrator wanting to learn how to deploy QlikView Server for server management, analysis and testing, and QlikView Publisher for publishing of business content, then this is the perfect book for you. No prior experience with QlikView is expected.

  10. Strategic Brand Management Tools in Publishing

    OpenAIRE

    Pitsaki, Irini

    2011-01-01

    Further to the introduction of the brand concept evolution and theory, as well as the ways these operate in the publishing sector (see paper: Pitsaki, I. 2010), the present paper treats publishing strategies and the tools used to establish them. Publishers often base their brand strategy on classic marketing approaches, such as the marketing mix -product, price, promotion, placement and people. They also direct their products to specific market segments in regard to the type of content and te...

  11. Privacy preserving data anonymization of spontaneous ADE reporting system dataset.

    Science.gov (United States)

    Lin, Wen-Yang; Yang, Duen-Chuan; Wang, Jie-Teng

    2016-07-18

    To facilitate long-term safety surveillance of marketed drugs, many spontaneous reporting systems (SRSs) of ADR events have been established worldwide. Since the data collected by SRSs contain sensitive personal health information that should be protected to prevent the identification of individuals, this raises the issue of privacy preserving data publishing (PPDP), that is, how to sanitize (anonymize) raw data before publishing. Although much work has been done on PPDP, very few studies have focused on protecting the privacy of SRS data, and none of the existing anonymization methods is well suited to SRS datasets, which exhibit characteristics such as rare events, multiple records per individual, and multi-valued sensitive attributes. We propose a new privacy model called MS(k, θ*)-bounding for protecting published spontaneous ADE reporting data from privacy attacks. Our model has the flexibility of varying privacy thresholds, i.e., θ*, for different sensitive values and takes the characteristics of SRS data into consideration. We also propose an anonymization algorithm for sanitizing the raw data to meet the requirements specified through the proposed model. Our algorithm adopts a greedy clustering strategy to group the records into clusters, conforming to an innovative anonymization metric that aims to minimize the privacy risk while maintaining the data utility for ADR detection. An empirical study was conducted using the FAERS dataset from 2004Q1 to 2011Q4. We compared our model with four prevailing methods, including k-anonymity, (X, Y)-anonymity, multi-sensitive l-diversity, and (α, k)-anonymity, evaluated via two measures, Danger Ratio (DR) and Information Loss (IL), and considered three different scenarios of threshold setting for θ*: uniform, level-wise, and frequency-based. We also conducted experiments to inspect the impact of anonymized data on the strengths of discovered ADR signals. With all three
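    The abstract describes, but does not reproduce, the greedy clustering at the heart of the sanitization step. As a loose illustration of the general idea only (this is not the paper's MS(k, θ*)-bounding algorithm; the sort-and-cut strategy and all names below are simplified stand-ins), the following sketch groups a quasi-identifier into clusters of at least k records and publishes each cluster as a generalized range:

    ```python
    # Illustrative greedy clustering for k-anonymity-style generalization.
    # NOT the paper's MS(k, theta*)-bounding algorithm; a simplified stand-in.
    def greedy_k_clusters(values, k):
        """Sort quasi-identifier values and cut them into clusters of >= k."""
        ordered = sorted(values)
        clusters = []
        for v in ordered:
            if clusters and len(clusters[-1]) < k:
                clusters[-1].append(v)
            else:
                clusters.append([v])
        if len(clusters) > 1 and len(clusters[-1]) < k:  # merge undersized tail
            clusters[-2].extend(clusters.pop())
        return clusters

    def generalize(cluster):
        """Publish the cluster-wide range instead of the exact values."""
        return (min(cluster), max(cluster))

    ages = [23, 25, 27, 31, 33, 35, 36, 62]
    clusters = greedy_k_clusters(ages, k=3)
    print([generalize(c) for c in clusters])  # → [(23, 27), (31, 62)]
    ```

    Each published range now covers at least k individuals, so an attacker who knows a record's exact age cannot single it out within its cluster.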

  12. Mergers, Acquisitions, and Access: STM Publishing Today

    Science.gov (United States)

    Robertson, Kathleen

    Electronic publishing is changing the fundamentals of the entire printing/delivery/archive system that has served as the distribution mechanism for scientific research over the last century and a half. The merger-mania of the last 20 years, preprint pools, and publishers' licensing and journals-bundling plans are among the phenomena impacting the scientific information field. Science-Technology-Medical (STM) publishing is experiencing a period of intense consolidation and reorganization. This paper gives an overview of the economic factors fueling these trends, the major STM publishers, and the government regulatory bodies that referee this industry in Europe, Canada, and the USA.

  13. Investigation of previously derived Hyades, Coma, and M67 reddenings

    International Nuclear Information System (INIS)

    Taylor, B.J.

    1980-01-01

    New Hyades polarimetry and field star photometry have been obtained to check the Hyades reddening, which was found to be nonzero in a previous paper. The new Hyades polarimetry implies essentially zero reddening; this is also true of polarimetry published by Behr (which was incorrectly interpreted in the previous paper). Four photometric techniques which are presumed to be insensitive to blanketing are used to compare the Hyades to nearby field stars; these four techniques also yield essentially zero reddening. When all of these results are combined with others which the author has previously published and a simultaneous solution for the Hyades, Coma, and M67 reddenings is made, the results are E(B-V) = 3 ± 2 (σ) mmag, -1 ± 3 (σ) mmag, and 46 ± 6 (σ) mmag, respectively. No support for a nonzero Hyades reddening is offered by the new results. When the newly obtained reddenings for the Hyades, Coma, and M67 are compared with results from techniques given by Crawford and by users of the David Dunlap Observatory photometric system, no differences between the new and other reddenings are found which are larger than about 2σ. The author had previously found that the M67 main-sequence stars have about the same blanketing as that of Coma and less blanketing than the Hyades; this conclusion is essentially unchanged by the revised reddenings.

  14. False gold: Safely navigating open access publishing to avoid predatory publishers and journals.

    Science.gov (United States)

    McCann, Terence V; Polacsek, Meg

    2018-04-01

    The aim of this study was to review and discuss predatory open access publishing in the context of nursing and midwifery and develop a set of guidelines that serve as a framework to help clinicians, educators and researchers avoid predatory publishers. Open access publishing is increasingly common across all academic disciplines. However, this publishing model is vulnerable to exploitation by predatory publishers, posing a threat to nursing and midwifery scholarship and practice. Guidelines are needed to help researchers recognize predatory journals and publishers and understand the negative consequences of publishing in them. Discussion paper. A literature search of BioMed Central, CINAHL, MEDLINE with Full Text and PubMed for terms related to predatory publishing, published in the period 2007-2017. Lack of awareness of the risks and pressure to publish in international journals may result in nursing and midwifery researchers publishing their work in dubious open access journals. Caution should be taken prior to writing and submitting a paper to avoid predatory publishers. The advantage of open access publishing is that it provides readers with access to peer-reviewed research as soon as it is published online. However, predatory publishers use deceptive methods to exploit open access publishing for their own profit. Clear guidelines are needed to help researchers safely navigate open access publishing. A deeper understanding of the risks of predatory publishing is needed. Clear guidelines should be followed by nursing and midwifery researchers seeking to publish their work in open access journals. © 2017 John Wiley & Sons Ltd.

  15. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
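    Stage 2 of the pipeline (EM estimation of a Gaussian-mixture PDF, followed by classification into three tissue classes) can be sketched with scikit-learn; the synthetic (T1, T2) intensity clusters below are invented stand-ins for real MRI voxels, and `covariance_type="full"` mirrors the paper's use of the full covariance matrix rather than the diagonal:

    ```python
    # Hedged sketch of EM-based Gaussian-mixture classification of voxels
    # into three tissue classes (CSF, grey matter, white matter).
    # Synthetic (T1, T2) intensities stand in for real MRI data.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    csf   = rng.multivariate_normal([30, 90], [[25, 5], [5, 25]], 500)
    grey  = rng.multivariate_normal([60, 60], [[25, 8], [8, 25]], 500)
    white = rng.multivariate_normal([90, 30], [[25, 5], [5, 25]], 500)
    voxels = np.vstack([csf, grey, white])

    # EM estimation of the mixture PDF with full covariance matrices
    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=0).fit(voxels)
    labels = gmm.predict(voxels)   # one tissue-class label per voxel
    print(np.bincount(labels))     # voxels assigned to each class
    ```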

  16. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and re-sampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
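    The two stronger-performing approaches compared in the paper, the Mann-Whitney U test and re-sampling, can be illustrated on a small two-sample dataset of war-gaming size; the toy "baseline" and "enhanced" samples and the simple permutation scheme below are our own, not the paper's Monte Carlo design:

    ```python
    # Illustrative small-sample comparison: Mann-Whitney U test versus a
    # re-sampling (permutation) test on the difference of means.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=10.0, scale=2.0, size=8)   # existing system
    enhanced = rng.normal(loc=12.5, scale=2.0, size=8)   # proposed system

    # Rank-based Mann-Whitney U test
    u_stat, u_p = mannwhitneyu(baseline, enhanced, alternative="two-sided")

    def permutation_pvalue(a, b, n_resamples=10_000, seed=0):
        """Two-sided permutation test on the difference of sample means."""
        rng = np.random.default_rng(seed)
        observed = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        hits = 0
        for _ in range(n_resamples):
            rng.shuffle(pooled)
            diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
            if diff >= observed:
                hits += 1
        return (hits + 1) / (n_resamples + 1)   # add-one smoothing

    perm_p = permutation_pvalue(baseline, enhanced)
    print(f"Mann-Whitney p = {u_p:.4f}, permutation p = {perm_p:.4f}")
    ```

    With only eight observations per side, exact enumeration of permutations would also be feasible; random re-sampling is used here for brevity.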

  17. Academic Publishing: Making the Implicit Explicit

    Directory of Open Access Journals (Sweden)

    Cecile Badenhorst

    2016-07-01

    Full Text Available For doctoral students, publishing in peer-reviewed journals is a task many face with anxiety and trepidation. The world of publishing, from choosing a journal, negotiating with editors and navigating reviewers’ responses is a bewildering place. Looking in from the outside, it seems that successful and productive academic writers have knowledge that is inaccessible to novice scholars. While there is a growing literature on writing for scholarly publication, many of these publications promote writing and publishing as a straightforward activity that anyone can achieve if they follow the rules. We argue that the specific and situated contexts in which academic writers negotiate publishing practices is more complicated and messy. In this paper, we attempt to make explicit our publishing processes to highlight the complex nature of publishing. We use autoethnographic narratives to provide discussion points and insights into the challenges of publishing peer reviewed articles. One narrative is by a doctoral student at the beginning of her publishing career, who expresses her desires, concerns and anxieties about writing for publication. The other narrative focuses on the publishing practices of a more experienced academic writer. Both are international scholars working in the Canadian context. The purpose of this paper is to explore academic publishing through the juxtaposition of these two narratives to make explicit some of the more implicit processes. Four themes emerge from these narratives. To publish successfully, academic writers need: (1) to be discourse analysts; (2) to have a critical competence; (3) to have writing fluency; and (4) to be emotionally intelligent.

  18. Privacy-Preserving Data Publishing An Overview

    CERN Document Server

    Wong, Raymond Chi-Wing

    2010-01-01

    Privacy preservation has become a major issue in many data analysis applications. When a data set is released to other parties for data analysis, privacy-preserving techniques are often required to reduce the possibility of identifying sensitive information about individuals. For example, in medical data, sensitive information can be the fact that a particular patient suffers from HIV. In spatial data, sensitive information can be a specific location of an individual. In web surfing data, the information that a user browses certain websites may be considered sensitive. Consider a dataset conta

  19. Publish or perish: authorship and peer review

    Science.gov (United States)

    Publish or perish is defined in Wikipedia as the pressure to publish work constantly to further or sustain one’s career in academia. This is an apt description given that refereed scientific publications are the currency of science and the primary means for broad dissemination of knowledge. Professi...

  20. Open Access Publishing: What Authors Want

    Science.gov (United States)

    Nariani, Rajiv; Fernandez, Leila

    2012-01-01

    Campus-based open access author funds are being considered by many academic libraries as a way to support authors publishing in open access journals. Article processing fees for open access have been introduced recently by publishers and have not yet been widely accepted by authors. Few studies have surveyed authors on their reasons for publishing…

  1. Publisher Correction: Invisible Trojan-horse attack

    DEFF Research Database (Denmark)

    Sajeed, Shihan; Minshull, Carter; Jain, Nitin

    2017-01-01

    A correction to this article has been published and is linked from the HTML version of this paper. The error has been fixed in the paper.

  2. The cost of publishing in Danish astronomy

    DEFF Research Database (Denmark)

    Dorch, Bertil F.

    I investigate the cost of publishing in Danish astronomy on a fine scale, including all direct publication costs. The figures show how the annual number of publications with authors from Denmark in astronomy journals increased by a factor of approximately four during 15 years (Elsevier’s Scopus database), together with the corresponding increase in the potential (maximum) cost of publishing.

  3. Pages from the Desktop: Desktop Publishing Today.

    Science.gov (United States)

    Crawford, Walt

    1994-01-01

    Discusses changes that have made desktop publishing appealing and reasonably priced. Hardware, software, and printer options for getting started and moving on, typeface developments, and the key characteristics of desktop publishing are described. The author's notes on 33 articles from the personal computing literature from January-March 1994 are…

  4. Desktop Publishing for the Gifted/Talented.

    Science.gov (United States)

    Hamilton, Wayne

    1987-01-01

    Examines the nature of desktop publishing and how it can be used in the classroom for gifted/talented students. Characteristics and special needs of such students are identified, and it is argued that desktop publishing addresses those needs, particularly with regard to creativity. Twenty-six references are provided. (MES)

  5. Equity for open-access journal publishing.

    Directory of Open Access Journals (Sweden)

    Stuart M Shieber

    2009-08-01

    Full Text Available Open-access journals, which provide access to their scholarly articles freely and without limitations, are at a systematic disadvantage relative to traditional closed-access journal publishing and its subscription-based business model. A simple, cost-effective remedy to this inequity could put open-access publishing on a path to become a sustainable, efficient system.

  6. 20 CFR 902.3 - Published information.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Published information. 902.3 Section 902.3 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES RULES REGARDING AVAILABILITY OF INFORMATION § 902.3 Published information. (a) Federal Register. Pursuant to sections 552 and 553 of title 5 of the...

  7. Publishers' Sales Strategies: A Questionable Business.

    Science.gov (United States)

    Eaglen, Audrey B.

    1988-01-01

    Speed, fill rate, and discount are reasons why it is often preferable for libraries to order directly from publishers rather than through a distributor. Nevertheless, some publishers have decided not to accept orders from libraries and schools. This has had a deleterious effect on libraries and library collections. (MES)

  8. Scientific publishing: some food for thought

    Directory of Open Access Journals (Sweden)

    Vittorio Bo

    2007-03-01

    Full Text Available Scientific publishing, here to be considered in a broader sense, as publishing of both specialised scientific journals and science popularisation works addressed to a wider audience, has been sailing for some years on troubled waters. To gather some possible food for thought is the purpose of this brief article.

  9. Electronic Publishing in Science: Changes and Risks.

    Science.gov (United States)

    Kinne, Otto

    1999-01-01

    Discussion of the Internet and the guidance of the World Wide Web Consortium focuses on scientific communication and electronic publishing. Considers the speed of communicating and disseminating information; quality issues; cost; library subscriptions; publishers; and risks and concerns, including the role of editors and reviewers or referees.…

  10. Another Interface: Electronic Publishing and Technical Services.

    Science.gov (United States)

    Yamamoto, Rumi

    1986-01-01

    Discusses the problems of assimilating electronic publishing within the technical services area of academic libraries: whether to consider electronic journals as acquisitions; how to catalog them; whether to charge users for access to them; and how to preserve online publications for future research. Future trends in electronic publishing are…

  11. Electronic Publishing: Introduction to This Issue.

    Science.gov (United States)

    Siegel, Martin A.

    1994-01-01

    Provides an overview of this special issue that addresses the possibilities and implications of electronic publishing and information dissemination as key components of effective education. Highlights include the theory and framework of electronic publishing; differences between electronic text and print; development of new educational materials;…

  12. A dataset for examining trends in publication of new Australian insects

    Directory of Open Access Journals (Sweden)

    Robert Mesibov

    2014-07-01

    Full Text Available Australian Faunal Directory data were used to create a new, publicly available dataset, nai50, which lists 18318 species and subspecies names for Australian insects described in the period 1961–2010, together with associated publishing data. The number of taxonomic publications introducing the new names varied little around a long-term average of 70 per year, with ca 420 new names published per year during the 30-year period 1981–2010. Within this stable pattern there were steady increases in multi-authored and 'Smith in Jones and Smith' names, and a decline in publication of names in entomology journals and books. For taxonomic works published in Australia, a publications peak around 1990 reflected increases in museum, scientific society and government agency publishing, but a subsequent decline is largely explained by a steep drop in the number of papers on insect taxonomy published by Australia's national science agency, CSIRO.

  13. New journals for publishing medical case reports.

    Science.gov (United States)

    Akers, Katherine G

    2016-04-01

    Because they do not rank highly in the hierarchy of evidence and are not frequently cited, case reports describing the clinical circumstances of single patients are seldom published by medical journals. However, many clinicians argue that case reports have significant educational value, advance medical knowledge, and complement evidence-based medicine. Over the last several years, a vast number (∼160) of new peer-reviewed journals have emerged that focus on publishing case reports. These journals are typically open access and have relatively high acceptance rates. However, approximately half of the publishers of case reports journals engage in questionable or "predatory" publishing practices. Authors of case reports may benefit from greater awareness of these new publication venues as well as an ability to discriminate between reputable and non-reputable journal publishers.

  14. Open Access, data capitalism and academic publishing.

    Science.gov (United States)

    Hagner, Michael

    2018-02-16

    Open Access (OA) is widely considered a breakthrough in the history of academic publishing, rendering the knowledge produced by the worldwide scientific community accessible to all. In numerous countries, national governments, funding institutions and research organisations have undertaken enormous efforts to establish OA as the new publishing standard. The benefits and new perspectives, however, cause various challenges. This essay addresses several issues, including that OA is deeply embedded in the logic and practices of data capitalism. Given that OA has proven an attractive business model for commercial publishers, the key predictions of OA-advocates, namely that OA would liberate both scientists and tax payers from the chains of global publishing companies, have not become true. In its conclusion, the paper discusses the opportunities and pitfalls of non-commercial publishing.

  15. Exploring Digital News Publishing Business Models

    DEFF Research Database (Denmark)

    Lindskow, Kasper

    News publishers in the industrialized world are experiencing a fundamental challenge to their business models because of the changing modes of consumption, competition, and production of their offerings that are associated with the emergence of the networked information society. The erosion of the traditional business models poses an existential threat to news publishing and has given rise to a continuing struggle among news publishers to design digital business models that will be sustainable in the future. This dissertation argues that a central and underresearched aspect of digital news publishing business models concerns the production networks that support the co-production of digital news offerings. To fill this knowledge gap, this dissertation explores the strategic design of the digital news publishing production networks that are associated with HTML-based news offerings on the open Web...

  16. 77 FR 70176 - Previous Participation Certification

    Science.gov (United States)

    2012-11-23

    ... participants' previous participation in government programs and ensure that the past record is acceptable prior to granting approval to participate... The information is designed to be 100 percent automated and digital submission of all data and certifications is...

  17. On the Tengiz petroleum deposit previous study

    International Nuclear Information System (INIS)

    Nysangaliev, A.N.; Kuspangaliev, T.K.

    1997-01-01

    A previous study of the Tengiz petroleum deposit is described. Some considerations about the structure of the productive formation and the specific characteristic properties of the petroleum-bearing collectors are presented. Recommendations are given on their detailed study and on using experience from the exploration and development of petroleum deposits that are analogous in the most important geological and industrial parameters. (author)

  18. Subsequent pregnancy outcome after previous foetal death

    NARCIS (Netherlands)

    Nijkamp, J. W.; Korteweg, F. J.; Holm, J. P.; Timmer, A.; Erwich, J. J. H. M.; van Pampus, M. G.

    Objective: A history of foetal death is a risk factor for complications and foetal death in subsequent pregnancies as most previous risk factors remain present and an underlying cause of death may recur. The purpose of this study was to evaluate subsequent pregnancy outcome after foetal death and to

  19. The Data Issue: Opportunities and Challenges for Scientific Publishers

    Science.gov (United States)

    Murphy, F.; Irving, D. H.

    2011-12-01

    Using the recent report for the 'Opportunities in Data Exchange' Project produced by - and for - researchers, libraries/data centres and publishers (and which is based on a broad range of studies, questionnaires and evidence) we have defined current practices and expectations, and the gaps and dilemmas involved in producing data and datasets, and then analysed their relationship to formal publications. As a result, we identified potential opportunities to evolve scientific insights to be more useful and re-useful: with consequent implications for custodianship and long-term data management. We also defined a number of key incentives and barriers towards achieving these objectives. As a case study, the earth and environmental sciences have come under particularly close scrutiny with respect to data-ownership and -sharing arrangements, sometimes with damaging results to the discipline's reputation. These issues, along with considerable technological challenges, have to be handled effectively in order to best support all the users along the data chain. To that end, we show that key stakeholders - among them scientific publishers - need to have a clear idea of how to progress data-intensive derived information, which we demonstrate is often not the case. Towards bridging this knowledge gap, we have compiled a roadmap of next steps and key issues to be acknowledged and addressed by the scientific publishing community. These include: engaging directly with researchers, policy-makers, funding bodies and direct competitors to build innovative partnerships and enhance impact; providing technological and training investment and developing alongside the emerging discipline of 'data scientist': the 'data publisher'. This individual/company will need to combine a close understanding of researchers' priorities, together with market, legal and technical opportunities and restrictions.

  20. Critical analysis of marketing in Croatian publishing

    Directory of Open Access Journals (Sweden)

    Silvija Gašparić

    2018-03-01

    Full Text Available Marketing is an inevitable part of today's modern lifestyle. The role that marketing plays has become so big that it is now the most important part of business. Due to the crisis that is still affecting publishers in Croatia, this paper emphasizes the power of advertising as a key ingredient in overcoming this situation and upgrading the system of publishing in Croatia. The framework of the paper is based on marketing as a tool that leads to the popularization of books and an increase in sales. Besides the experimental part, which gives an insight into the public's opinion about books, publishing and marketing, the first chapter gives a literature review and an analysis of the whole process of book publishing in Croatia, pointing out mistakes that Croatian publishers make. The benefits of foreign publishing are also mentioned and used for comparison and projection onto the problems of the native market. The aim of this analysis and this viewpoint paper is to contribute to the comprehension of marketing strategies and activities and their use and gains in Croatian publishing.

  1. Associating uncertainty with datasets using Linked Data and allowing propagation via provenance chains

    Science.gov (United States)

    Car, Nicholas; Cox, Simon; Fitch, Peter

    2015-04-01

    With earth-science datasets increasingly being published to enable re-use in projects disassociated from the original data acquisition or generation, there is an urgent need for associated metadata to be connected, in order to guide their application. In particular, provenance traces should support the evaluation of data quality and reliability. However, while standards for describing provenance are emerging (e.g. PROV-O), these do not include the necessary statistical descriptors and confidence assessments. UncertML has a mature conceptual model that may be used to record uncertainty metadata. However, by itself UncertML does not support the representation of uncertainty of multi-part datasets, and provides no direct way of associating the uncertainty information - metadata in relation to a dataset - with dataset objects. We present a method to address both these issues by combining UncertML with PROV-O, and delivering the resulting uncertainty-enriched provenance traces through the Linked Data API. UncertProv extends the PROV-O provenance ontology with an RDF formulation of the UncertML conceptual model elements, adds further elements to support uncertainty representation without a conceptual model, and integrates UncertML through links to documents. The Linked Data API provides a systematic way of navigating from dataset objects to their UncertProv metadata and back again. The Linked Data API's 'views' capability enables access to UncertML and non-UncertML uncertainty metadata representations for a dataset. With this approach, it is possible to access and navigate the uncertainty metadata associated with a published dataset using standard semantic web tools, such as SPARQL queries. Where the uncertainty data follows the UncertML model it can be automatically interpreted and may also support automatic uncertainty propagation.
Repositories wishing to enable uncertainty propagation for all datasets must ensure that all elements that are associated with uncertainty
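    The core idea of navigating from a dataset object to attached uncertainty metadata can be sketched with rdflib. Note that the `un:` namespace and its `hasUncertainty` / `standardDeviation` terms below are illustrative placeholders of our own, not actual UncertProv or UncertML vocabulary:

    ```python
    # Hypothetical sketch: attach uncertainty metadata to a PROV-O dataset
    # entity and retrieve it with SPARQL. The "un:" terms are placeholders.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    PROV = Namespace("http://www.w3.org/ns/prov#")
    UN = Namespace("http://example.org/uncertprov#")   # hypothetical namespace
    EX = Namespace("http://example.org/data/")

    g = Graph()
    g.bind("prov", PROV)
    g.bind("un", UN)

    # A dataset entity with a provenance trace and an uncertainty descriptor
    g.add((EX.streamflow2014, RDF.type, PROV.Entity))
    g.add((EX.streamflow2014, PROV.wasGeneratedBy, EX.gaugingActivity))
    g.add((EX.streamflow2014, UN.hasUncertainty, EX.u1))
    g.add((EX.u1, UN.standardDeviation, Literal(0.12, datatype=XSD.double)))

    # Navigate from the entity to its uncertainty metadata with SPARQL
    q = """
    SELECT ?sd WHERE {
      ?ds a prov:Entity ;
          un:hasUncertainty ?u .
      ?u un:standardDeviation ?sd .
    }
    """
    for row in g.query(q, initNs={"prov": PROV, "un": UN}):
        print(row.sd.toPython())
    ```

    A client following this pattern could also walk the `prov:wasGeneratedBy` chain to propagate uncertainty along the provenance trace.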

  2. Analysis of Naïve Bayes Algorithm for Email Spam Filtering across Multiple Datasets

    Science.gov (United States)

    Fitriah Rusland, Nurul; Wahid, Norfaradilla; Kasim, Shahreen; Hafit, Hanayanti

    2017-08-01

    E-mail spam continues to be a problem on the Internet. Spammed e-mail may contain many copies of the same message, commercial advertisements or other irrelevant posts like pornographic content. In previous research, different filtering techniques were used to detect these e-mails, such as Random Forest, Naïve Bayes, Support Vector Machine (SVM) and Neural Network. In this research, we test the Naïve Bayes algorithm for e-mail spam filtering on two datasets, i.e., the Spam Data and SPAMBASE datasets [8], and evaluate its performance. Performance on the datasets is evaluated based on accuracy, recall, precision and F-measure. Our research uses the WEKA tool for the evaluation of the Naïve Bayes algorithm for e-mail spam filtering on both datasets. The result shows that the type of email and the number of instances in the dataset influence the performance of Naïve Bayes.
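    The evaluation described here, Naïve Bayes spam filtering scored by accuracy, recall, precision and F-measure, can be sketched with scikit-learn rather than WEKA; the tiny toy corpus below is invented for illustration and is not the Spam Data or SPAMBASE dataset:

    ```python
    # Minimal sketch of Naive Bayes e-mail spam filtering with scikit-learn,
    # scored by accuracy, precision, recall and F-measure.
    # The toy corpus is invented for illustration only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score)

    train_msgs = [
        "win cash prize now", "cheap pills buy now", "free lottery win money",
        "meeting agenda for monday", "project report attached", "lunch at noon?",
    ]
    train_labels = [1, 1, 1, 0, 0, 0]          # 1 = spam, 0 = ham

    test_msgs = ["win free money now", "monday project meeting"]
    test_labels = [1, 0]

    vec = CountVectorizer()                     # bag-of-words features
    X_train = vec.fit_transform(train_msgs)
    X_test = vec.transform(test_msgs)

    clf = MultinomialNB().fit(X_train, train_labels)
    pred = clf.predict(X_test)

    print("accuracy ", accuracy_score(test_labels, pred))
    print("precision", precision_score(test_labels, pred))
    print("recall   ", recall_score(test_labels, pred))
    print("F-measure", f1_score(test_labels, pred))
    ```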

  3. Navigating the heavy seas of online publishing

    DEFF Research Database (Denmark)

    Carpentier, Samuel; Dörry, Sabine; Lord, Sébastien

    2015-01-01

    Articulo – Journal of Urban Research celebrates its 10th anniversary! To celebrate this milestone, the current editors discuss the numerous changes and challenges related to publishing a peer-reviewed online journal. Since 2005, Articulo has progressively become more international and more professional, and the editors look ahead to navigating the rough seas of online publishing in the future.

  4. Preparing and Publishing a Scientific Manuscript

    Directory of Open Access Journals (Sweden)

    Padma R Jirge

    2017-01-01

    Publishing original research in a peer-reviewed and indexed journal is an important milestone for a scientist or a clinician. It is an important parameter to assess academic achievements. However, technical and language barriers may prevent many enthusiasts from ever publishing. This review highlights the important preparatory steps for creating a good manuscript and the most widely used IMRaD (Introduction, Materials and Methods, Results, and Discussion) method for writing a good manuscript. It also provides a brief overview of the submission and review process of a manuscript for publishing in a biomedical journal.

  5. [SciELO: method for electronic publishing].

    Science.gov (United States)

    Laerte Packer, A; Rocha Biojone, M; Antonio, I; Mayumi Takemaka, R; Pedroso García, A; Costa da Silva, A; Toshiyuki Murasaki, R; Mylek, C; Carvalho Reisl, O; Rocha F Delbucio, H C

    2001-01-01

    It describes the SciELO (Scientific Electronic Library Online) Methodology for the electronic publishing of scientific periodicals, examining issues such as the transition from traditional printed publication to electronic publishing, the scientific communication process, the principles on which the methodology's development was founded, its application in building the SciELO site, its modules and components, the tools used for its construction, etc. The article also discusses the potentialities and trends for the area in Brazil and Latin America, pointing out questions and proposals which should be investigated and solved by the methodology. It concludes that the SciELO Methodology is an efficient, flexible and comprehensive solution for scientific electronic publishing.

  6. Open Access Publishing in the Electronic Age.

    Science.gov (United States)

    Kovács, Gábor L

    2014-10-01

    The principle of open-access (OA) publishing is increasingly prevalent in the field of laboratory medicine. Open-access journals (OAJs) are available online to the reader usually without financial, legal, or technical barriers. Some are subsidized, and some require payment on behalf of the author. OAJs are one of the two general methods for providing OA. The other one is self-archiving in a repository. The electronic journal of the IFCC (eJIFCC) is a platinum OAJ, i.e., there is no charge to read or to submit to this journal. Traditionally, the author was required to transfer the copyright to the journal publisher. Publishers claimed this was necessary in order to protect authors' rights. However, many authors found this unsatisfactory, and have used their influence to effect a gradual move towards a license to publish instead. Under such a system, the publisher has permission to edit, print, and distribute the article commercially, but the author(s) retain the other rights themselves. An OA mandate is a policy adopted by a research institution, research funder, or government which requires researchers to make their published, peer-reviewed journal articles and conference papers OA by self-archiving their peer-reviewed drafts in a repository ("green OA") or by publishing them in an OAJ ("gold OA"). Creative Commons (CC) is a nonprofit organization that enables the sharing and use of creativity and knowledge through free legal tools. The free, easy-to-use copyright licenses provide a simple, standardized way to give the public permission to share and use creative work. CC licenses let you easily change your copyright terms from the default of "all rights reserved" to "some rights reserved." OA publishing also raises a number of new ethical problems (e.g. predatory publishers, fake papers). Laboratory scientists are encouraged to publish their scientific results OA (especially in eJIFCC). They should, however, be aware of their rights, institutional mandate

  7. A dataset from bottom trawl survey around Taiwan

    Directory of Open Access Journals (Sweden)

    Kwang-tsao Shao

    2012-05-01

    Bottom trawl fishery is one of the most important coastal fisheries in Taiwan, in both production and economic value. However, its annual production began to decline due to overfishing in the 1980s, and its bycatch problem has also seriously damaged the fishery resource. Thus, the government banned bottom trawling within 3 nautical miles of the shoreline in 1989. To evaluate the effectiveness of this policy, a four-year survey was conducted from 2000–2003 in the waters around Taiwan and Penghu (Pescadores Islands), one region each year. All fish specimens collected from trawling were brought back to the lab for identification, individual counts and body weight measurement. These raw data have been integrated and established in the Taiwan Fish Database (http://fishdb.sinica.edu.tw). They have also been published through TaiBIF (http://taibif.tw), FishBase and GBIF (websites: see below). This dataset contains 631 fish species and 3,529 records, making it the most complete dataset of demersal fish fauna and their temporal and spatial distribution on soft marine habitats in Taiwan.

  8. The Amateurs' Love Affair with Large Datasets

    Science.gov (United States)

    Price, Aaron; Jacoby, S. H.; Henden, A.

    2006-12-01

    Amateur astronomers are professionals in other areas. They bring expertise from such varied and technical careers as computer science, mathematics, engineering, and marketing. These skills, coupled with an enthusiasm for astronomy, can be used to help manage the large data sets coming online in the next decade. We will show specific examples where teams of amateurs have been involved in mining large, online data sets and have authored and published their own papers in peer-reviewed astronomical journals. Using the proposed LSST database as an example, we will outline a framework for involving amateurs in data analysis and education with large astronomical surveys.

  9. Subsequent childbirth after a previous traumatic birth.

    Science.gov (United States)

    Beck, Cheryl Tatano; Watson, Sue

    2010-01-01

    Nine percent of new mothers in the United States who participated in the Listening to Mothers II Postpartum Survey screened positive for meeting the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for posttraumatic stress disorder after childbirth. Women who have had a traumatic birth experience report fewer subsequent children and a longer length of time before their second baby. Childbirth-related posttraumatic stress disorder impacts couples' physical relationship, communication, conflict, emotions, and bonding with their children. The purpose of this study was to describe the meaning of women's experiences of a subsequent childbirth after a previous traumatic birth. Phenomenology was the research design used. An international sample of 35 women participated in this Internet study. Women were asked, "Please describe in as much detail as you can remember your subsequent pregnancy, labor, and delivery following your previous traumatic birth." Colaizzi's phenomenological data analysis approach was used to analyze the stories of the 35 women. Data analysis yielded four themes: (a) riding the turbulent wave of panic during pregnancy; (b) strategizing: attempts to reclaim their body and complete the journey to motherhood; (c) bringing reverence to the birthing process and empowering women; and (d) still elusive: the longed-for healing birth experience. Subsequent childbirth after a previous birth trauma has the potential to either heal or retraumatize women. During pregnancy, women need permission and encouragement to grieve their prior traumatic births to help remove the burden of their invisible pain.

  10. 18th International Conference on Electronic Publishing

    CERN Document Server

    Dobreva, Milena

    2014-01-01

    The ways in which research data is used and handled continue to capture public attention and are the focus of increasing interest. Electronic publishing is intrinsic to digital data management, and relevant to the fields of data mining, digital publishing and social networks, with their implications for scholarly communication, information services, e-learning, e-business and the cultural heritage sector. This book presents the proceedings of the 18th International Conference on Electronic Publishing (ELPUB), held in Thessaloniki, Greece, in June 2014. The conference brings together researchers and practitioners to discuss the many aspects of electronic publishing, and the theme this year is 'Let's put data to use: digital scholarship for the next generation'. As well as examining the role of cultural heritage and service organisations in the creation, accessibility, duration and long-term preservation of data, it provides a discussion forum for the appraisal, citation and licensing of research data and the n...

  11. Predatory publishing and cybercrime targeting academics.

    Science.gov (United States)

    Umlauf, Mary Grace; Mochizuki, Yuki

    2018-04-01

    The purpose of this report is to inform and warn academics about practices used by cybercriminals who seek to profit from unwary scholars and undermine the industry of science. This report describes the signs, symptoms, characteristics, and consequences of predatory publishing and related forms of consumer fraud. Methods to curb these cybercrimes include educating scholars and students about tactics used by predatory publishers; institutional changes in how faculty are evaluated using publications; soliciting cooperation from the industries that support academic publishing and indexing to curb incorporation of illegitimate journals; and taking an offensive position by reporting these consumer fraud crimes to the authorities. Over and above the problem of publishing good science in fraudulent journals, disseminating and citing poor-quality research threaten the credibility of science and of nursing. © 2018 John Wiley & Sons Australia, Ltd.

  12. Design Options for a Desktop Publishing Course.

    Science.gov (United States)

    Mayer, Kenneth R.; Nelson, Sandra J.

    1992-01-01

    Offers recommendations for development of an undergraduate desktop publishing course. Discusses scholastic level and prerequisites, purpose and objectives, instructional resources and methodology, assignments and evaluation, and a general course outline. (SR)

  13. Open Access publishing in physics gains momentum

    CERN Multimedia

    2006-01-01

    The first meeting of European particle physics funding agencies took place on 3 November at CERN to establish a consortium for Open Access publishing in particle physics, SCOAP3 (Sponsoring Consortium for Open Access Publishing in Particle Physics). Open Access could transform the academic publishing world, with a great impact on research. The traditional model of research publication is funded through reader subscriptions. Open Access will turn this model on its head by changing the funding structure of research results, without increasing the overall cost of publishing. Instead of demanding payment from readers, publications will be distributed free of charge, financed by funding agencies via laboratories and the authors. This new concept will bring greater benefits and broaden opportunities for researchers and funding agencies by providing unrestricted distribution of the results of publicly funded research. The meeting marked a positive step forward, with international support from laboratories, fundin...

  14. INNOVATION MANAGEMENT TOOLS IN PUBLISHING COMPANIES

    Directory of Open Access Journals (Sweden)

    A. Shegda

    2013-09-01

    This article is devoted to the highly topical issue of innovation management in the modern publishing business, where the introduction of technological innovation is regarded as a promising strategy for the development of the industry. The paper deals with the main problems in managing publishing companies and reviews innovation management tools. It examines the decline in the book market, a trend that requires publishers to introduce innovative methods of production and distribution. The process of innovation management draws on basic tools such as marketing innovation, benchmarking, franchising, and innovation engineering. The aim of the article is therefore to analyze the modern tools of innovation management in the publishing field.

  15. Monitoring Information By Industry - Printing and Publishing

    Science.gov (United States)

    Stationary source emissions monitoring is required to demonstrate that a source is meeting the requirements in Federal or state rules. This page is about control techniques used to reduce pollutant emissions in the printing and publishing industry.

  16. Printing and Publishing Industry Training Board

    Science.gov (United States)

    Industrial Training International, 1974

    1974-01-01

    Described is the supervisory training program currently in operation in the printing and publishing industry. The purpose of the training program is to increase managerial efficiency and to better prepare new supervisors. (DS)

  17. NSA Diana Wueger Published in Washington Quarterly

    OpenAIRE

    Grant, Catherine L.

    2016-01-01

    National Security Affairs (NSA) News NSA Faculty Associate for Research Diana Wueger has recently had an article titled “India’s Nuclear-Armed Submarines: Deterrence or Danger?” published in the Washington Quarterly.

  18. The Dataset of Countries at Risk of Electoral Violence

    OpenAIRE

    Birch, Sarah; Muchlinski, David

    2017-01-01

    Electoral violence is increasingly affecting elections around the world, yet researchers have been limited by a paucity of granular data on this phenomenon. This paper introduces and describes a new dataset of electoral violence – the Dataset of Countries at Risk of Electoral Violence (CREV) – that provides measures of 10 different types of electoral violence across 642 elections held around the globe between 1995 and 2013. The paper provides a detailed account of how and why the dataset was ...

  19. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  20. Electronic astronomical information handling and flexible publishing.

    Science.gov (United States)

    Heck, A.

    The current dramatic evolution in information technology is bringing major modifications in the way scientists work and communicate. The concept of electronic information handling encompasses the diverse types of information, the different media, as well as the various communication methodologies and technologies. It ranges from the collection of data to the final publication of results and sharing of knowledge. New problems and challenges also result from the new information culture, especially on legal, ethical, and educational grounds. Electronic publishing will have to diverge from being an electronic version of contributions on paper and will be part of a more general flexible-publishing policy. The benefits of private publishing are questioned. The procedures for validating published material and for evaluating scientific activities will have to be adjusted too. Provision of electronic refereed information independently from commercial publishers is now feasible. Scientists and scientific institutions now have the ability to run an efficient information server with validated (refereed) material without the help of commercial publishers.

  1. Electronic publishing and Acupuncture in Medicine.

    Science.gov (United States)

    White, Adrian

    2006-09-01

    The internet has fundamentally altered scientific publishing; this article discusses current models and how they affect this journal. The greatest innovation is a new range of open access journals published only on the internet, aimed at rapid publication and universal access. In most cases authors pay a publication charge for the overhead costs of the journal. Journals that are published by professional organisations primarily for their members have some functions other than publishing research, including clinical articles, conference reports and news items. A small number of these journals are permitting open access to their research reports. Commercial science publishing still exists, where profit for shareholders provides motivation in addition to the desire to spread knowledge for the benefit of all. A range of electronic databases now exists that offer various levels of listing and searching. Some databases provide direct links to journal articles, such as the LinkOut scheme in PubMed. Acupuncture in Medicine will continue to publish in paper format; all research articles will be available on open access, but non-subscribers will need to pay for certain other articles for the first 12 months after publication. All Acupuncture in Medicine articles will in future be included in the LinkOut scheme, and be presented to the databases electronically.

  2. Public Availability to ECS Collected Datasets

    Science.gov (United States)

    Henderson, J. F.; Warnken, R.; McLean, S. J.; Lim, E.; Varner, J. D.

    2013-12-01

    Coastal nations have spent considerable resources exploring the limits of their extended continental shelf (ECS) beyond 200 nm. Although these studies are funded to fulfill requirements of the UN Convention on the Law of the Sea, the investments are producing new data sets in frontier areas of Earth's oceans that will be used to understand, explore, and manage the seafloor and sub-seafloor for decades to come. Although many of these datasets are considered proprietary until a nation's potential ECS has become 'final and binding', an increasing amount of data is being released and utilized by the public. Data sets include multibeam, seismic reflection/refraction, bottom sampling, and geophysical data. The U.S. ECS Project, a multi-agency collaboration whose mission is to establish the full extent of the continental shelf of the United States consistent with international law, relies heavily on data and accurate, standard metadata. The United States has made it a priority to make available to the public all data collected with ECS funding as quickly as possible. The National Oceanic and Atmospheric Administration's (NOAA) National Geophysical Data Center (NGDC) supports this objective by partnering with academia and other federal government mapping agencies to archive, inventory, and deliver marine mapping data in a coordinated, consistent manner. This includes ensuring quality, standard metadata and developing and maintaining data delivery capabilities built on modern digital data archives. Other countries, such as Ireland, have submitted their ECS data for public availability and many others have made pledges to participate in the future. The data services provided by NGDC support the U.S. ECS effort as well as many developing nations' ECS efforts through the U.N. Environmental Program. Modern discovery, visualization, and delivery of scientific data and derived products that span national and international sources of data ensure the greatest re-use of data and

  3. BIA Indian Lands Dataset (Indian Lands of the United States)

    Data.gov (United States)

    Federal Geographic Data Committee — The American Indian Reservations / Federally Recognized Tribal Entities dataset depicts feature location, selected demographics and other associated data for the 561...

  4. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

    We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign onto a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back at the client, and construct professional-quality visualizations of the results.
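    The distribute-and-collect workflow the abstract describes (ship an analysis function to workers, run it on dataset partitions, merge partial results at the client) can be sketched locally with a thread pool standing in for Grid worker nodes. The partition contents and the per-partition analysis are invented for illustration.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Toy dataset split into partitions, as a stand-in for dataset
    # chunks distributed to Grid worker nodes (values invented).
    partitions = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0, 9.0]]

    def analyse(partition):
        """Per-worker analysis: here, just a partial sum and count."""
        return sum(partition), len(partition)

    # "Distribute" the analysis to workers and collect the partial results.
    with ThreadPoolExecutor(max_workers=3) as pool:
        partials = list(pool.map(analyse, partitions))

    # Merge the partials back at the client into a final statistic.
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    mean = total / count
    ```

    The key design point is that only the small per-partition summaries travel back to the client, not the terabyte-scale partitions themselves.
    
    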

  5. Socioeconomic Data and Applications Center (SEDAC) Treaty Status Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Socioeconomic Data and Application Center (SEDAC) Treaty Status Dataset contains comprehensive treaty information for multilateral environmental agreements,...

  6. Handling limited datasets with neural networks in medical applications: A small-data approach.

    Science.gov (United States)

    Shaikhina, Torgyn; Khovanova, Natalia A

    2017-01-01

    Single-centre studies in the medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (fewer than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for the application of artificial neural networks (NNs) to regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the methods of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied for the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85 MPa. When evaluated on independent test samples, the NN achieved an accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with an 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
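    The "multiple runs" idea in the abstract (repeat training from different random initialisations and aggregate the resulting models to damp run-to-run fluctuations on small samples) can be sketched with a deliberately tiny one-parameter model. The data, learning rate and model below are invented for illustration and are far simpler than the NNs in the study.

    ```python
    import random
    import statistics

    # Tiny small-sample regression problem: y is roughly 2x (values invented).
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

    def train(seed, epochs=200, lr=0.01):
        """One training run: gradient descent on y = w*x from a random start."""
        rng = random.Random(seed)
        w = rng.uniform(-1.0, 1.0)              # random initialisation per run
        for _ in range(epochs):
            for x, y in data:
                w -= lr * 2 * (w * x - y) * x   # gradient step on squared error
        return w

    # Multiple runs: repeat training under different seeds, then aggregate.
    runs = [train(seed) for seed in range(25)]
    w_agg = statistics.median(runs)             # aggregated model parameter
    prediction = w_agg * 5.0                    # predict for an unseen x = 5
    ```

    With a full NN each run would land in a different local optimum, so the median over runs matters more than it does for this convex toy; the surrogate-data step of the paper (refitting on shuffled targets to test significance) is omitted here.
    
    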

  7. Using PIDs to Support the Full Research Data Publishing Lifecycle

    Science.gov (United States)

    Waard, A. D.

    2016-12-01

    Persistent identifiers can help support scientific research, track scientific impact and let researchers achieve recognition for their work. We discuss a number of ways in which Elsevier utilizes PIDs to support the scholarly lifecycle: To improve the process of storing and sharing data, Mendeley Data (http://data.mendeley.com) makes use of persistent identifiers to support the dynamic nature of data and software, by tracking and recording the provenance and versioning of datasets. This system now allows the comparison of different versions of a dataset, to see precisely what was changed during a versioning update. To present research data in context for the reader, we include PIDs in research articles as hyperlinks: https://www.elsevier.com/books-and-journals/content-innovation/data-base-linking. In some cases, PIDs fetch data files from repositories that allow the embedding of visualizations, e.g. with PANGAEA and PubChem: https://www.elsevier.com/books-and-journals/content-innovation/protein-viewer; https://www.elsevier.com/books-and-journals/content-innovation/pubchem. To normalize referenced data elements, the Resource Identification Initiative - which we developed together with members of the Force11 RRID group - introduces a unified standard for resource identifiers (RRIDs) that can easily be interpreted by both humans and text mining tools (https://www.force11.org/group/resource-identification-initiative/update-resource-identification-initiative), as can be seen in our Antibody Data app: https://www.elsevier.com/books-and-journals/content-innovation/antibody-data. To enable better citation practices and support a robust metrics system for sharing research data, we helped develop, and are early adopters of, the Force11 Data Citation Principles and Implementation groups (https://www.force11.org/group/dcip). Lastly, through our work with the Research Data Alliance Publishing Data Services group, we helped create a set of guidelines (http

  8. Analysis of thirteen predatory publishers: a trap for eager-to-publish researchers.

    Science.gov (United States)

    Bolshete, Pravin

    2018-01-01

    To demonstrate a strategy employed by predatory publishers to trap eager-to-publish authors or researchers into submitting their work. This was a case study of 13 potential, possible, or probable predatory scholarly open-access publishers with similar characteristics. Eleven publishers were included from Beall's list and two additional publishers were identified from a Google web search. Each publisher's site was visited and its content analyzed. Publishers publishing biomedical journals were further explored, and additional data were collected regarding their volumes, details of publications and editorial-board members. Overall, the look and feel of all 13 publishers was similar, including the names of publishers, website addresses, homepage content, homepage images, and lists of journals and subject areas, as if they had been copied and pasted. There were discrepancies in article-processing charges among the publishers. None of the publishers identified names in their contact details, primarily including only email addresses. Author instructions were similar across all 13 publishers. Most publishers listed journals of varied subject areas, including biomedical journals (12 publishers), covering different geographic locations. Most biomedical journals published few or no articles. The highest number of articles published by any single biomedical journal was 28. Several editorial-board members were listed across more than one journal, with one member listed 81 times across 69 different journals (i.e., twice in 12 of them). There was strong reason to believe that predatory publishers may operate several publication houses with different names under a single roof to trap authors from different geographic locations.

  9. Introducing a Web API for Dataset Submission into a NASA Earth Science Data Center

    Science.gov (United States)

    Moroni, D. F.; Quach, N.; Francis-Curley, W.

    2016-12-01

    As the landscape of data becomes increasingly more diverse in the domain of Earth Science, the challenges of managing and preserving data become more onerous and complex, particularly for data centers on fixed budgets and limited staff. Many solutions already exist to ease the cost burden for the downstream component of the data lifecycle, yet most archive centers are still racing to keep up with the influx of new data that still needs to find a quasi-permanent resting place. For instance, having well-defined metadata that is consistent across the entire data landscape provides for well-managed and preserved datasets throughout the latter end of the data lifecycle. Translators between different metadata dialects are already in operational use, and facilitate keeping older datasets relevant in today's world of rapidly evolving metadata standards. However, very little is done to address the first phase of the lifecycle, which deals with the entry of both data and the corresponding metadata into a system that is traditionally opaque and closed off to external data producers, thus resulting in a significant bottleneck to the dataset submission process. The ATRAC system was NOAA NCEI's answer to this previously obfuscated barrier for scientists wishing to find a home for their climate data records, providing a web-based entry point to submit timely and accurate metadata and information about a very specific dataset. A couple of NASA's Distributed Active Archive Centers (DAACs) have implemented their own versions of a web-based dataset and metadata submission form, including the ASDC and the ORNL DAAC. The Physical Oceanography DAAC is the most recent in the list of NASA-operated DAACs that have begun to offer their own web-based dataset and metadata submission services to data producers. What makes the PO.DAAC dataset and metadata submission service stand out from these pre-existing services is the option of utilizing both a web browser GUI and a RESTful API to
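    A RESTful submission API of the kind described above typically accepts dataset metadata as a JSON payload in an HTTP POST. The sketch below packages such a request with the standard library; the endpoint URL and metadata field names are hypothetical illustrations, not PO.DAAC's actual interface, and the request is built but deliberately not sent.

    ```python
    import json
    import urllib.request

    def build_submission_request(endpoint, metadata):
        """Package dataset metadata as a JSON POST request (not yet sent)."""
        body = json.dumps(metadata).encode("utf-8")
        return urllib.request.Request(
            endpoint,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    # Hypothetical endpoint and metadata fields, for illustration only.
    req = build_submission_request(
        "https://example.org/submit",
        {"title": "Sea Surface Temperature L2", "format": "netCDF"},
    )
    ```

    Sending the request (e.g. via `urllib.request.urlopen(req)`) would then return the service's response, such as a submission identifier, in whatever schema the real API defines.
    
    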

  10. Development of a browser application to foster research on linking climate and health datasets: Challenges and opportunities.

    Science.gov (United States)

    Hajat, Shakoor; Whitmore, Ceri; Sarran, Christophe; Haines, Andy; Golding, Brian; Gordon-Brown, Harriet; Kessel, Anthony; Fleming, Lora E

    2017-01-01

    Improved data linkages between diverse environment and health datasets have the potential to provide new insights into the health impacts of environmental exposures, including complex climate change processes. Initiatives that link and explore big data in the environment and health arenas are now being established. To encourage advances in this nascent field, this article documents the development of a web browser application to facilitate such future research, the challenges encountered to date, and how they were addressed. A 'storyboard approach' was used to aid the initial design and development of the application. The application followed a 3-tier architecture: a spatial database server for storing and querying data, server-side code for processing and running models, and client-side browser code for user interaction and for displaying data and results. The browser was validated by reproducing previously published results from a regression analysis of time-series datasets of daily mortality, air pollution and temperature in London. Data visualisation and analysis options of the application are presented. The main factors that shaped the development of the browser were: accessibility, open-source software, flexibility, efficiency, user-friendliness, licensing restrictions and data confidentiality, visualisation limitations, cost-effectiveness, and sustainability. Creating dedicated data and analysis resources, such as the one described here, will become an increasingly vital step in improving understanding of the complex interconnections between the environment and human health and wellbeing, whilst still ensuring appropriate confidentiality safeguards. The issues raised in this paper can inform the future development of similar tools by other researchers working in this field. Copyright © 2016 Elsevier B.V. All rights reserved.
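    The validation step mentioned above, reproducing a published regression of daily mortality on environmental exposures, reduces at its core to fitting a regression model to linked time series. A minimal sketch of ordinary least squares on one exposure variable is shown below; the daily values are invented for illustration and the real analysis involved air pollution, temperature and more elaborate time-series modelling.

    ```python
    # Toy linked time series: daily mean temperature (°C) and deaths per day.
    temperature = [12.0, 15.0, 18.0, 21.0, 24.0, 27.0]
    mortality   = [30.0, 29.0, 27.0, 26.0, 24.0, 23.0]

    def ols(x, y):
        """Slope and intercept of y = a + b*x by ordinary least squares."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return my - b * mx, b

    intercept, slope = ols(temperature, mortality)
    ```

    In the browser application such a model would run server-side over the linked spatial database, with the client tier displaying the fitted relationship.
    
    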

  11. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    J. Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the l...

  12. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Goldfarb, S.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project. A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please e...

  13. Electronic Publishing or Electronic Information Handling?

    Science.gov (United States)

    Heck, A.

    The current dramatic evolution in information technology is bringing major modifications in the way scientists communicate. The concept of 'electronic publishing' is too restrictive and often has different, sometimes conflicting, interpretations. It is thus giving way to the broader notion of 'electronic information handling', encompassing the diverse types of information, the different media, and the various communication methodologies and technologies. New problems and challenges also result from this new information culture, especially on legal, ethical, and educational grounds. The procedures for validating 'published material' and for evaluating scientific activities will have to be adjusted too. 'Fluid' information is becoming a common concept. Electronic publishing cannot be conceived without links to knowledge bases, nor without intelligent information retrieval tools.

  14. Electronic publishing and intelligent information retrieval

    Science.gov (United States)

    Heck, A.

    1992-01-01

    Europeans are now taking steps to homogenize policies and standardize procedures in electronic publishing (EP) in astronomy and space sciences. This arose from an open meeting organized in Oct. 1991 at Strasbourg Observatory (France) and another business meeting held late Mar. 1992 with the major publishers and journal editors in astronomy and space sciences. The ultimate aim of EP might be considered the so-called 'intelligent information retrieval' (IIR), or better named 'advanced information retrieval' (AIR), taking advantage of the fact that the material to be published appears at some stage in a machine-readable form. It is obvious that the combination of desktop and electronic publishing with networking and new structuring of knowledge bases will profoundly reshape not only our ways of publishing, but also our procedures of communicating and retrieving information. It should be noted that a world-wide survey of astronomers and space scientists, carried out before the October 1991 colloquium on the various packages and machines used, indicated that TEX-related packages were already the majority choice in our community. It has also been stressed at each meeting that the European developments should be carried out in collaboration with what is done in the US (STELLAR project, for instance). American scientists and journal editors actually attended both meetings mentioned above. The paper will offer a review of the status of electronic publishing in astronomy and its possible contribution to advanced information retrieval in this field. It will also report on recent meetings such as the 'Astronomy from Large Databases-2 (ALD-2)' conference dealing with the latest developments in networking, in data, information, and knowledge bases, as well as in the related methodologies.

  15. Publishing high-quality climate data on the semantic web

    Science.gov (United States)

    Woolf, Andrew; Haller, Armin; Lefort, Laurent; Taylor, Kerry

    2013-04-01

    The effort over more than a decade to establish the semantic web [Berners-Lee et al., 2001] has received a major boost in recent years through the Open Government movement. Governments around the world are seeking technical solutions to enable more open and transparent access to the Public Sector Information (PSI) they hold. Existing technical protocols and data standards tend to be domain specific, and so limit the ability to publish and integrate data across domains (health, environment, statistics, education, etc.). The web provides a domain-neutral platform for information publishing, and has proven itself beyond expectations for publishing and linking human-readable electronic documents. Extending the web pattern to data (often called Web 3.0) offers enormous potential. The semantic web applies the basic web principles to data [Berners-Lee, 2006]:
    - using URIs as identifiers (for data objects and real-world 'things', instead of documents)
    - making the URIs actionable by providing useful information via HTTP
    - using a common exchange standard (serialised RDF for data instead of HTML for documents)
    - establishing typed links between information objects to enable linking and integration
    Leading examples of 'linked data' for publishing PSI may be found in both the UK (http://data.gov.uk/linked-data) and the US (http://www.data.gov/page/semantic-web). The Bureau of Meteorology (BoM) is Australia's national meteorological agency, and has a new mandate to establish a national environmental information infrastructure (under the National Plan for Environmental Information, NPEI [BoM, 2012a]). While the initial approach is based on the existing best-practice Spatial Data Infrastructure (SDI) architecture, linked data is being explored as a technological alternative that shows great promise for the future. We report here the first trial of government linked data in Australia under data.gov.au. In this initial pilot study, we have taken BoM's new high-quality reference surface
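    The web-of-data principles in this abstract can be made concrete with a few RDF triples in N-Triples syntax. The URIs and label below are invented for illustration and are not real BoM or data.gov.au identifiers:

```python
# RDF triples illustrating the linked-data pattern: actionable HTTP URIs
# as identifiers, RDF as the exchange format, and typed links.
station = "<http://example.org/id/station/066062>"
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
RDFS_LABEL = "<http://www.w3.org/2000/01/rdf-schema#label>"

triples = [
    (station, RDF_TYPE, "<http://example.org/def/ObservationStation>"),
    (station, RDFS_LABEL, '"Example Observatory"'),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) tuples, one statement per line."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)
```

    Dereferencing the station URI over HTTP would then return exactly this kind of RDF description, which is what makes the identifier "actionable".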

  16. Introduction to scientific publishing backgrounds, concepts, strategies

    CERN Document Server

    Öchsner, Andreas

    2013-01-01

    This book is a very concise introduction to the basic knowledge of scientific publishing. It starts with the basics of writing a scientific paper, and recalls the different types of scientific documents. It gives an overview of the major scientific publishing companies and different business models. The book also introduces abstracting and indexing services and how they can be used for the evaluation of science, scientists, and institutions. Last but not least, this short book addresses the problem of plagiarism and publication ethics.

  17. Publishing activities improves undergraduate biology education.

    Science.gov (United States)

    Smith, Michelle K

    2018-06-01

    To improve undergraduate biology education, there is an urgent need for biology instructors to publish their innovative active-learning instructional materials in peer-reviewed journals. To do this, instructors can measure student knowledge about a variety of biology concepts, iteratively design activities, explore student learning outcomes and publish the results. Creating a set of well-vetted activities, searchable through a journal interface, saves other instructors time and encourages the use of active-learning instructional practices. For authors, these publications offer new opportunities to collaborate and can provide evidence of a commitment to using active-learning instructional techniques in the classroom.

  18. Publishing to become an 'ideal academic'

    DEFF Research Database (Denmark)

    Lund, Rebecca

    2012-01-01

    over a two-year period in a recently merged Finnish university. I focus specifically on how a translocal discourse of competitive performance measurement and standards of academic excellence are accomplished in the local construction of the “ideal academic” as a person who publishes articles in A level...... journals. While the construct is hard for anyone to live up to, it would seem to be more difficult for some people than for others. The current obsession with getting published in top journals place those women, who are heavily engaged in teaching activities and with responsibilities besides academic work...

  19. Advances in semantic authoring and publishing

    CERN Document Server

    Groza, T

    2012-01-01

    Dissemination can be seen as a communication process between scientists. Over the course of several publications, they expose and support their findings, while discussing stated claims. Such discourse structures are trapped within the content of the publications, thus making the semantics discoverable only by humans. In addition, the lack of advances in scientific publishing, where electronic publications are still used as simple projections of paper documents, combined with the current growth in the amount of scientific research being published, transforms the process of finding relevant lite

  20. Publish Subscribe Systems Design and Principles

    CERN Document Server

    Tarkoma, Sasu

    2012-01-01

    This book offers a unified treatment of the problems solved by publish/subscribe and of how to design and implement the solutions. In this book, the author provides insight into publish/subscribe technology, including the design, implementation, and evaluation of new systems based on the technology. The book also addresses the basic design patterns and solutions, and discusses their application in practical application scenarios. Furthermore, the author examines current standards and industry best practices as well as recent research proposals in the area. Finally, necessary content ma
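    A topic-based publish/subscribe broker of the kind the book analyses can be sketched in a few lines. This minimal version (class and method names are our own, not the book's) shows the key property: publishers and subscribers are decoupled and know only the topic name:

```python
class Broker:
    """Minimal topic-based publish/subscribe broker (illustrative sketch)."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register `callback` to receive every message on `topic`."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver `message` to all subscribers of `topic`."""
        for callback in self.subscribers.get(topic, []):
            callback(message)
```

    Real systems layer filtering, routing, and delivery guarantees on top of this core dispatch loop, but the decoupling shown here is the defining design pattern.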

  1. Scholarly publishing depends on peer reviewers.

    Science.gov (United States)

    Fernandez-Llimos, Fernando

    2018-01-01

    The peer-review crisis is posing a risk to the scholarly peer-reviewed journal system. Journals have to ask many potential peer reviewers to obtain a minimum acceptable number who agree to review a manuscript. Several solutions have been suggested to overcome this shortage. From reimbursing reviewers for the job to eliminating pre-publication reviews, one cannot predict which is more dangerous for the future of scholarly publishing. And why not acknowledge reviewers' contribution to the final version of the published article? PubMed created two categories of contributors: authors [AU] and collaborators [IR]. Why not a third category for the peer-reviewer?

  2. Scholarly publishing depends on peer reviewers

    Directory of Open Access Journals (Sweden)

    Fernandez-Llimos F

    2018-03-01

    Full Text Available The peer-review crisis is posing a risk to the scholarly peer-reviewed journal system. Journals have to ask many potential peer reviewers to obtain a minimum acceptable number who agree to review a manuscript. Several solutions have been suggested to overcome this shortage. From reimbursing reviewers for the job to eliminating pre-publication reviews, one cannot predict which is more dangerous for the future of scholarly publishing. And why not acknowledge reviewers' contribution to the final version of the published article? PubMed created two categories of contributors: authors [AU] and collaborators [IR]. Why not a third category for the peer-reviewer?

  3. An Analysis of the GTZAN Music Genre Dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    Most research in automatic music genre recognition has used the dataset assembled by Tzanetakis et al. in 2001. The composition and integrity of this dataset, however, has never been formally analyzed. For the first time, we provide an analysis of its composition, and create a machine...

  4. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  5. An Annotated Dataset of 14 Cardiac MR Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated cardiac MR images. Points of correspondence are placed on each image at the left ventricle (LV). As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given....

  6. A New Outlier Detection Method for Multidimensional Datasets

    KAUST Repository

    Abdel Messih, Mario A.

    2012-07-01

    This study develops a novel hybrid method for outlier detection (HMOD) that combines the ideas of distance-based and density-based methods. The proposed method has two main advantages over most other outlier detection methods. The first is that it works well on both dense and sparse datasets. The second is that, unlike most other outlier detection methods that require careful parameter setting and prior knowledge of the data, HMOD is not very sensitive to small changes in parameter values within certain parameter ranges. The only required parameter to set is the number of nearest neighbors. In addition, we made a fully parallelized implementation of HMOD, which makes it very efficient in applications. Moreover, we proposed a new way of using outlier detection for redundancy reduction in datasets, in which users can specify a confidence level evaluating how accurately the reduced dataset represents the original one. HMOD is evaluated on synthetic datasets (dense and mixed “dense and sparse”) and a bioinformatics problem: redundancy reduction of a dataset of position weight matrices (PWMs) of transcription factor binding sites. In addition, in the process of assessing the performance of our redundancy reduction method, we developed a simple tool that can be used to evaluate the confidence level of a reduced dataset representing the original dataset. The evaluation of the results shows that our method can be used in a wide range of problems.
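    The abstract does not give HMOD's exact formulas, but the idea of combining a distance component with a density component, parameterised only by the number of nearest neighbours, can be sketched as follows. The scoring formula here is an illustrative stand-in, not the published method:

```python
# Hybrid distance/density outlier score in the spirit of HMOD (assumed
# formulation): kNN distance weighted by its ratio to the global mean.
import math

def knn_mean_dist(points, i, k):
    """Mean distance from points[i] to its k nearest neighbours."""
    dists = sorted(math.dist(points[i], p)
                   for j, p in enumerate(points) if j != i)
    return sum(dists[:k]) / k

def outlier_scores(points, k=3):
    """Higher score = more outlying: the kNN distance (distance component)
    is scaled by its ratio to the global mean (a crude density component)."""
    knn = [knn_mean_dist(points, i, k) for i in range(len(points))]
    avg = sum(knn) / len(knn)
    return [d * (d / avg) for d in knn]
```

    As in the abstract's claim, k is the only parameter the user must set.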

  7. Treatment planning constraints to avoid xerostomia in head and neck radiotherapy: an independent test of QUANTEC criteria using a prospectively collected dataset

    Science.gov (United States)

    Moiseenko, Vitali; Wu, Jonn; Hovan, Allan; Saleh, Ziad; Apte, Aditya; Deasy, Joseph O.; Harrow, Stephen; Rabuka, Carman; Muggli, Adam; Thompson, Anna

    2011-01-01

    Purpose: The severe reduction of salivary function (xerostomia) is a common complication following radiation therapy for head and neck cancer. Consequently, guidelines to ensure adequate function, based on parotid gland tolerance dose-volume parameters, have been suggested by the QUANTEC group (1) and by Ortholan et al. (2). We performed a validation test of these guidelines against a prospectively collected dataset and compared it to a previously published dataset. Methods and Materials: Whole-mouth stimulated salivary flow data from 66 head and neck cancer patients treated with radiotherapy at the British Columbia Cancer Agency (BCCA) were measured, and treatment planning data were abstracted. Flow measurements were collected from 50 patients at 3 months and 60 patients at 12 months of follow-up. Previously published data from a second institution (WUSTL) were used for comparison. A logistic model was used to describe the incidence of grade 4 xerostomia as a function of the mean dose to the spared parotid gland. The rate of correctly predicting the lack of xerostomia (negative predictive value, NPV) was computed for both the QUANTEC constraints and the Ortholan et al. (2) recommendation to constrain the total volume of both glands receiving more than 40 Gy to less than 33%. Results: In both data sets, the fitted logistic model parameters for xerostomia at 12 months after therapy, based on the least-irradiated gland, were D50 = 32.4 Gy and γ = 0.97. NPVs for the QUANTEC guideline were 94% (BCCA data) and 90% (WUSTL data); for the Ortholan et al. (2) guideline, NPVs were 85% (BCCA) and 86% (WUSTL). Conclusion: This confirms that the QUANTEC guideline effectively avoids xerostomia, and is somewhat more effective than constraints on the volume receiving more than 40 Gy. PMID:21640505
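    The logistic dose-response fit reported above (D50 = 32.4 Gy, γ = 0.97) can be written down directly. The parameterisation below is one common form of the logistic NTCP model and is an assumption, since the abstract does not state which form was used; the NPV formula is standard:

```python
# Logistic NTCP model (assumed parameterisation) plus the standard NPV
# calculation used to test the dose-volume guidelines.
import math

def ntcp_logistic(dose, d50=32.4, gamma=0.97):
    """Complication probability; equals 0.5 at dose = d50 by construction."""
    return 1.0 / (1.0 + math.exp(4.0 * gamma * (1.0 - dose / d50)))

def npv(tn, fn):
    """Negative predictive value: correctly predicted absence of toxicity."""
    return tn / (tn + fn)
```

    With this form, γ is the normalised slope of the dose-response curve at D50, so a larger γ means a sharper transition between safe and toxic mean doses.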

  8. Solving the challenges of data preprocessing, uploading, archiving, retrieval, analysis and visualization for large heterogeneous paleo- and rock magnetic datasets

    Science.gov (United States)

    Minnett, R.; Koppers, A. A.; Tauxe, L.; Constable, C.; Jarboe, N. A.

    2011-12-01

    The Magnetics Information Consortium (MagIC) provides an archive for the wealth of rock- and paleomagnetic data and interpretations from studies on natural and synthetic samples. As with many fields, most peer-reviewed paleo- and rock magnetic publications only include high level results. However, access to the raw data from which these results were derived is critical for compilation studies and when updating results based on new interpretation and analysis methods. MagIC provides a detailed metadata model with places for everything from raw measurements to their interpretations. Prior to MagIC, these raw data were extremely cumbersome to collect because they mostly existed in a lab's proprietary format on investigator's personal computers or undigitized in field notebooks. MagIC has developed a suite of offline and online tools to enable the paleomagnetic, rock magnetic, and affiliated scientific communities to easily contribute both their previously published data and data supporting an article undergoing peer-review, to retrieve well-annotated published interpretations and raw data, and to analyze and visualize large collections of published data online. Here we present the technology we chose (including VBA in Excel spreadsheets, Python libraries, FastCGI JSON webservices, Oracle procedures, and jQuery user interfaces) and how we implemented it in order to serve the scientific community as seamlessly as possible. These tools are now in use in labs worldwide, have helped archive many valuable legacy studies and datasets, and routinely enable new contributions to the MagIC Database (http://earthref.org/MAGIC/).

  9. Annotating spatio-temporal datasets for meaningful analysis in the Web

    Science.gov (United States)

    Stasch, Christoph; Pebesma, Edzer; Scheider, Simon

    2014-05-01

    More and more environmental datasets that vary in space and time are available on the Web. This comes with the advantage that the data can be used for purposes other than originally foreseen, but also with the danger that users may apply inappropriate analysis procedures due to a lack of important assumptions made during the data collection process. In order to guide towards a meaningful (statistical) analysis of spatio-temporal datasets available on the Web, we have developed a Higher-Order-Logic formalism that captures some relevant assumptions in our previous work [1]. It allows proving statements about meaningful spatial prediction and aggregation in a semi-automated fashion. In this poster presentation, we will present a concept for annotating spatio-temporal datasets available on the Web with concepts defined in our formalism. To this end, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It allows capturing the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, that in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. In order to allow data producers to add the annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes, guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
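    The core idea, that the annotated variable type (point pattern, field, lattice, trajectory) determines which analyses are meaningful, can be sketched as a lookup. The specific rules below are simplified assumptions for illustration, not the paper's Higher-Order-Logic formalism:

```python
# Simplified stand-in for the annotation check: which operations are
# meaningful for which spatio-temporal variable type.
MEANINGFUL_OPS = {
    "field":         {"interpolate", "aggregate"},  # continuous in space
    "lattice":       {"aggregate"},                 # values tied to regions
    "point_pattern": {"aggregate"},                 # e.g. counts per region
    "trajectory":    set(),                         # neither, in general
}

def check_operation(variable_type, operation):
    """True if `operation` is meaningful for a dataset with this annotation."""
    return operation in MEANINGFUL_OPS.get(variable_type, set())
```

    A client (such as the R functions mentioned above) would read the RDF annotation, look up the variable type, and refuse or warn before running an inappropriate procedure.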

  10. Validity and reliability of stillbirth data using linked self-reported and administrative datasets.

    Science.gov (United States)

    Hure, Alexis J; Chojenta, Catherine L; Powers, Jennifer R; Byles, Julie E; Loxton, Deborah

    2015-01-01

    A high rate of stillbirth was previously observed in the Australian Longitudinal Study of Women's Health (ALSWH). Our primary objective was to test the validity and reliability of self-reported stillbirth data linked to state-based administrative datasets. Self-reported data, collected as part of the ALSWH cohort born in 1973-1978, were linked to three administrative datasets for women in New South Wales, Australia (n = 4374): the Midwives Data Collection; Admitted Patient Data Collection; and Perinatal Death Review Database. Linkages were obtained from the Centre for Health Record Linkage for the period 1996-2009. True cases of stillbirth were defined by being consistently recorded in two or more independent data sources. Sensitivity, specificity, positive predictive value, negative predictive value, percent agreement, and kappa statistics were calculated for each dataset. Forty-nine women reported 53 stillbirths. No dataset was 100% accurate. The administrative datasets performed better than self-reported data, with high accuracy and agreement. Self-reported data showed high sensitivity (100%) but low specificity (30%), meaning women who had a stillbirth always reported it, but there was also over-reporting of stillbirths. About half of the misreported cases in the ALSWH were able to be removed by identifying inconsistencies in longitudinal data. Data linkage provides great opportunity to assess the validity and reliability of self-reported study data. Conversely, self-reported study data can help to resolve inconsistencies in administrative datasets. Quantifying the strengths and limitations of both self-reported and administrative data can improve epidemiological research, especially by guiding methods and interpretation of findings.
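    The agreement statistics used in this validation can be computed from a standard 2x2 table. The counts in the example are invented to mirror the reported pattern (perfect sensitivity, low specificity from over-reporting), not the study's actual data:

```python
# Standard 2x2 validation metrics of the kind reported when comparing
# self-reported data against linked administrative records.
def diagnostic_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: counts from the 2x2 table of self-report vs. truth."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases that were reported
        "specificity": tn / (tn + fp),  # non-cases correctly not reported
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# invented counts: every true case reported (fn=0), but many false reports
metrics = diagnostic_metrics(tp=10, fp=7, fn=0, tn=3)
```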

  11. [Trends of electronic publishing in medicine and life sciences].

    Science.gov (United States)

    Strelski-Waisman, Neta; Waisman, Dan

    2005-09-01

    Scientific publication in the electronic media is gaining popularity in academic libraries, research institutions and commercial organizations. The electronic journal may shorten the processes of writing and publication, decrease publication and distribution costs, and enable access from any location in the world. Electronic publications have unique advantages: it is possible to search them, to create hyperlinks to references and footnotes, as well as to information on the web and to include graphics and photographs at a very low cost. Audio, video and tri-dimensional images may also be included. Electronic publishing may also speed up review and publication processes and enable the writer to receive immediate feedback through the web. However, in spite of the advantages, there are certain points that must be considered: accessibility to previously published material is not guaranteed as databases are not always stable and coverage may change without notice. In addition, the price that commercial publishers charge for their services may be very high or be subject to the purchase of a packaged deal that may include unwanted databases. Many issues of copyright and the use of published material are not yet finalized. In this review we discuss the advantages and disadvantages of the electronic scientific publication, the feasibility of keeping appropriate quality and peer-review process, the stability and accessibility of databases managed by the publishers and the acceptance of the electronic format by scientists and clinicians.

  12. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example the total number of events contained in dataset, and possible reasons for missing events such as a lost file. Finally we will describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...
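    Deriving aggregate information by correlating per-file metadata, e.g. the total event count of a dataset and possible losses, can be sketched like this (the record structure and field names are illustrative, not AMI's actual schema):

```python
# Dataset-level aggregates from per-file metadata, as a catalogue might
# derive them; the file records below are invented examples.
files = [
    {"name": "f1.root", "events": 1000, "lost": False},
    {"name": "f2.root", "events": 800,  "lost": False},
    {"name": "f3.root", "events": 500,  "lost": True},   # a lost file
]

def dataset_summary(files):
    """Aggregate event counts and flag files that explain missing events."""
    return {
        "total_events":   sum(f["events"] for f in files if not f["lost"]),
        "missing_events": sum(f["events"] for f in files if f["lost"]),
        "lost_files":     [f["name"] for f in files if f["lost"]],
    }
```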

  13. A dataset on tail risk of commodities markets.

    Science.gov (United States)

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices including World, Asia, and South East Asia for the period 2004-2015. Then, the dataset is divided into annual periods, showing the worst 5% of price movements for each year. The datasets are convenient for examining the tail risk of different commodities, as measured by Conditional Value at Risk (CVaR), as well as their changes over periods. The datasets can also be used to investigate the association between commodity markets and share markets.
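    CVaR over the worst 5% of price movements, as used in these datasets, can be computed in a few lines (an illustrative sketch, not the authors' code):

```python
def cvar(returns, alpha=0.05):
    """Conditional Value at Risk: the mean of the worst `alpha` share of
    returns (here, the worst 5% of daily price movements)."""
    n_tail = max(1, int(len(returns) * alpha))
    worst = sorted(returns)[:n_tail]
    return sum(worst) / len(worst)
```

    Unlike plain Value at Risk, which is just the cutoff return at the 5% quantile, CVaR averages everything beyond the cutoff and so captures how bad the tail actually is.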

  14. Hydrology Research with the North American Land Data Assimilation System (NLDAS) Datasets at the NASA GES DISC Using Giovanni

    Science.gov (United States)

    Mocko, David M.; Rui, Hualan; Acker, James G.

    2013-01-01

    The North American Land Data Assimilation System (NLDAS) is a collaboration project between NASA/GSFC, NOAA, Princeton Univ., and the Univ. of Washington. NLDAS has created a surface meteorology dataset using the best-available observations and reanalyses; the backbone of this dataset is a gridded precipitation analysis from rain gauges. This dataset is used to drive four separate land-surface models (LSMs) to produce datasets of soil moisture, snow, runoff, and surface fluxes. NLDAS datasets are available hourly and extend from Jan 1979 to near real-time with a typical 4-day lag. The datasets are available at 1/8th-degree over CONUS and portions of Canada and Mexico from 25-53 North. The datasets have been extensively evaluated against observations, and are also used as part of a drought monitor. NLDAS datasets are available from the NASA GES DISC and can be accessed via ftp, GDS, Mirador, and Giovanni. GES DISC news articles were published showing figures from the heat wave of 2011, Hurricane Irene, Tropical Storm Lee, and the low-snow winter of 2011-2012. For this presentation, Giovanni-generated figures using NLDAS data from the derecho across the U.S. Midwest and Mid-Atlantic will be presented. Similar figures will also be presented from the landfall of Hurricane Isaac and from the before-and-after drought conditions along the path of the tropical moisture into the central states of the U.S. Updates on future products and datasets from the NLDAS project will also be introduced.

  15. Standardization of GIS datasets for emergency preparedness of NPPs

    International Nuclear Information System (INIS)

    Saindane, Shashank S.; Suri, M.M.K.; Otari, Anil; Pradeepkumar, K.S.

    2012-01-01

    Probability of a major nuclear accident which can lead to large scale release of radioactivity into environment is extremely small by the incorporation of safety systems and defence-in-depth philosophy. Nevertheless emergency preparedness for implementation of counter measures to reduce the consequences are required for all major nuclear facilities. Iodine prophylaxis, Sheltering, evacuation etc. are protective measures to be implemented for members of public in the unlikely event of any significant releases from nuclear facilities. Bhabha Atomic Research Centre has developed a GIS supported Nuclear Emergency Preparedness Program. Preparedness for Response to Nuclear emergencies needs geographical details of the affected locations specially Nuclear Power Plant Sites and nearby public domain. Geographical information system data sets which the planners are looking for will have appropriate details in order to take decision and mobilize the resources in time and follow the Standard Operating Procedures. Maps are 2-dimensional representations of our real world and GIS makes it possible to manipulate large amounts of geo-spatially referenced data and convert it into information. This has become an integral part of the nuclear emergency preparedness and response planning. This GIS datasets consisting of layers such as village settlements, roads, hospitals, police stations, shelters etc. is standardized and effectively used during the emergency. The paper focuses on the need of standardization of GIS datasets which in turn can be used as a tool to display and evaluate the impact of standoff distances and selected zones in community planning. It will also highlight the database specifications which will help in fast processing of data and analysis to derive useful and helpful information. GIS has the capability to store, manipulate, analyze and display the large amount of required spatial and tabular data. 
This study intends to carry out a proper response and preparedness

  16. Dataset of cocoa aspartic protease cleavage sites

    Directory of Open Access Journals (Sweden)

    Katharina Janek

    2016-09-01

    Full Text Available The data provide information in support of the research article, “The cleavage specificity of the aspartic protease of cocoa beans involved in the generation of the cocoa-specific aroma precursors” (Janek et al., 2016 [1]. Three different protein substrates were partially digested with the aspartic protease isolated from cocoa beans and commercial pepsin, respectively. The obtained peptide fragments were analyzed by matrix-assisted laser-desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/TOF-MS/MS and identified using the MASCOT server. The N- and C-terminal ends of the peptide fragments were used to identify the corresponding in-vitro cleavage sites by comparison with the amino acid sequences of the substrate proteins. The same procedure was applied to identify the cleavage sites used by the cocoa aspartic protease during cocoa fermentation starting from the published amino acid sequences of oligopeptides isolated from fermented cocoa beans. Keywords: Aspartic protease, Cleavage sites, Cocoa, In-vitro proteolysis, Mass spectrometry, Peptides
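    Mapping identified peptide fragments back to cleavage sites, i.e. the bonds at the fragment's N- and C-termini within the substrate sequence, can be sketched as follows (the sequences in the example and test are invented, not cocoa protein data):

```python
def cleavage_sites(substrate, peptide):
    """1-based positions of the bonds flanking each occurrence of `peptide`
    in `substrate`: the bond cleaved before its N-terminus and the bond
    cleaved after its C-terminus (chain ends are not cleavage sites)."""
    sites, start = [], substrate.find(peptide)
    while start != -1:
        if start > 0:
            sites.append(start)          # bond preceding the N-terminus
        end = start + len(peptide)
        if end < len(substrate):
            sites.append(end)            # bond following the C-terminus
        start = substrate.find(peptide, start + 1)
    return sites
```

    Comparing the residues on either side of each reported position is then what reveals the protease's cleavage specificity.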

  17. Educational Systems Design Implications of Electronic Publishing.

    Science.gov (United States)

    Romiszowski, Alexander J.

    1994-01-01

    Discussion of electronic publishing focuses on the four main purposes of media in general: communication, entertainment, motivation, and education. Highlights include electronic journals and books; hypertext; user control; computer graphics and animation; electronic games; virtual reality; multimedia; electronic performance support;…

  18. Hypertext Publishing and the Revitalization of Knowledge.

    Science.gov (United States)

    Louie, Steven; Rubeck, Robert F.

    1989-01-01

    Discusses the use of hypertext for publishing and other document control activities in higher education. Topics discussed include a model of hypertext, called GUIDE, that is used at the University of Arizona Medical School; the increase in the number of scholarly publications; courseware development by faculty; and artificial intelligence. (LRW)

  19. Open Access Publishing in Particle Physics

    CERN Document Server

    2007-01-01

    Particle Physics, often referred to as High Energy Physics (HEP), spearheaded the Open Access dissemination of scientific results with the mass mailing of preprints in the pre-Web era and with the launch of the arXiv preprint system at the dawn of the '90s. The HEP community is now ready for a further push to Open Access while retaining all the advantages of the peer-review system and, at the same time, bringing the spiralling cost of journal subscriptions under control. I will present a plan for the conversion to Open Access of HEP peer-reviewed journals, through a consortium of HEP funding agencies, laboratories and libraries: SCOAP3 (Sponsoring Consortium for Open Access Publishing in Particle Physics). SCOAP3 will engage with scientific publishers towards building a sustainable model for Open Access publishing, which is as transparent as possible for HEP authors. The current system, in which journals' income comes from subscription fees, is replaced with a scheme where SCOAP3 compensates publishers for the costs...

  20. Publishing Qualitative Research in Counseling Journals

    Science.gov (United States)

    Hunt, Brandon

    2011-01-01

    This article focuses on the essential elements to be included when developing a qualitative study and preparing the findings for publication. Using the sections typically found in a qualitative article, the author describes content relevant to each section, with additional suggestions for publishing qualitative research.

  1. Publisher Correction: Geometric constraints during epithelial jamming

    Science.gov (United States)

    Atia, Lior; Bi, Dapeng; Sharma, Yasha; Mitchel, Jennifer A.; Gweon, Bomi; Koehler, Stephan A.; DeCamp, Stephen J.; Lan, Bo; Kim, Jae Hun; Hirsch, Rebecca; Pegoraro, Adrian F.; Lee, Kyu Ha; Starr, Jacqueline R.; Weitz, David A.; Martin, Adam C.; Park, Jin-Ah; Butler, James P.; Fredberg, Jeffrey J.

    2018-06-01

    In the version of this Article originally published, the Supplementary Movies were linked to the wrong descriptions. These have now been corrected. Additionally, the authors would like to note that co-authors James P. Butler and Jeffrey J. Fredberg contributed equally to this Article; this change has now been made.

  2. Doing Publishable Research with Undergraduate Students

    Science.gov (United States)

    Fenn, Aju J.; Johnson, Daniel K. N.; Smith, Mark Griffin; Stimpert, J. L.

    2010-01-01

    Many economics majors write a senior thesis. Although this experience can be the pinnacle of their education, publication is not the common standard for undergraduates. The authors describe four approaches that have allowed students to get their work published: (1) identify a topic, such as competitive balance in sports, and have students work on…

  3. Desktop publishing: a useful tool for scientists.

    Science.gov (United States)

    Lindroth, J R; Cooper, G; Kent, R L

    1994-01-01

    Desktop publishing offers features that are not available in word processing programs. The process yields an impressive and professional-looking document that is legible and attractive. It is a simple but effective tool to enhance the quality and appearance of your work and perhaps also increase your productivity.

  4. Desktop Publishing as a Learning Resources Service.

    Science.gov (United States)

    Drake, David

    In late 1988, Midland College in Texas implemented a desktop publishing service to produce instructional aids and reduce and complement the workload of the campus print shop. The desktop service was placed in the Media Services Department of the Learning Resource Center (LRC) for three reasons: the LRC was already established as a campus-wide…

  5. Desktop Publishing: Things Gutenberg Never Taught You.

    Science.gov (United States)

    Bowman, Joel P.; Renshaw, Debbie A.

    1989-01-01

    Provides a desktop publishing (DTP) overview, including: advantages and disadvantages; hardware and software requirements; and future development. Discusses cost-effectiveness, confidentiality, credibility, effects on volume of paper-based communication, and the need for training in layout and design which DTP creates. Includes a glossary of DTP…

  6. Basics of Desktop Publishing. Teacher Edition.

    Science.gov (United States)

    Beeby, Ellen

    This color-coded teacher's guide contains curriculum materials designed to give students an awareness of various desktop publishing techniques before they determine their computer hardware and software needs. The guide contains six units, each of which includes some or all of the following basic components: objective sheet, suggested activities…

  7. Reconfiguration Service for Publish/Subscribe Middleware

    NARCIS (Netherlands)

    Zieba, Bogumil; Glandrup, Maurice; van Sinderen, Marten J.; Wegdam, M.

    2006-01-01

    Mission-critical distributed systems are often designed as a set of distributed components that interact using publish/subscribe middleware. Currently, in these systems, software components are usually statically allocated to the nodes to fulfil predictability and reliability requirements. However, a

  8. Awareness and Perceptions of Published Osteoporosis Clinical ...

    African Journals Online (AJOL)

    Awareness and Perceptions of Published Osteoporosis Clinical Guidelines-a Survey of Primary Care Practitioners in the Cape Town Metropolitan Area. ... Further attention needs to be focused on developing implementation and dissemination strategies of evidence-based guidelines in South Africa. South African Journal of ...

  9. Librarians and Libraries Supporting Open Access Publishing

    Science.gov (United States)

    Richard, Jennifer; Koufogiannakis, Denise; Ryan, Pam

    2009-01-01

    As new models of scholarly communication emerge, librarians and libraries have responded by developing and supporting new methods of storing and providing access to information and by creating new publishing support services. This article will examine the roles of libraries and librarians in developing and supporting open access publishing…

  10. 12 CFR 271.3 - Published information.

    Science.gov (United States)

    2010-01-01

    ... preceding year upon all matters of policy relating to open market operations, showing the reasons underlying... information relating to open market operations of the Federal Reserve Banks is published in the Federal... Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) FEDERAL OPEN MARKET COMMITTEE RULES REGARDING...

  11. Electronic Publishing and The American Astronomical Society

    Science.gov (United States)

    Milkey, R. W.

    1999-12-01

    Electronic Publishing has created, and will continue to create, new opportunities and challenges for representing scientific work in new media and formats. The AAS will position itself to take advantage of these, both for newly created works and for improved representation of works already published. It is the view of the AAS that we hold the works that we publish in trust for our community and are obligated to protect the integrity of these works and to assure that they continue to be available to the research community. Assignment of copyright to the AAS by the author plays a central role in the preservation of the integrity and accessibility of the literature published by the American Astronomical Society. In return for such assignment the AAS allows the author to freely use the work for his/her own purposes and to control the grant of permission to third parties to use such materials. The AAS retains the right to republish the work in whatever format or medium, and to retain the rights after the author's death. Specific advantages to this approach include: assurance of the continued availability of the materials to the research and educational communities; a guarantee of the intellectual integrity of the materials in the archive; stimulation of the development of new means of presentation or of access to the archival literature; and provision of uniform treatment of copyright issues, relieving the individual authors of much of the administrative work.

  12. Open access publishing in physics gains momentum

    CERN Multimedia

    2006-01-01

    "The first meeting of European particle physics funding agencies took place today at CERN to establish a consortium for Open Access publishing in particle physics, SCOAP3. This is the first time an entire scientific field is exploring the conversion of its reader-paid journals into an author-paid Open Access format." (1 page)

  13. [Medical publishing in Norway 1905-2005].

    Science.gov (United States)

    Nylenna, Magne; Larsen, Øivind

    2005-06-02

    The nation-building process in Norway mainly took place before the Norwegian-Swedish union came to a close in 1905. This was not a dramatic change, though the end of the union did bring a lift to Norwegian national consciousness. In 1905 there were three general medical journals in Norway and approximately 1200 doctors. German was the most important language of international science, but most scientific publishing was done in Norwegian. After the Second World War, English became the dominating language of scientific communication. For medicine and medical publishing, the twentieth century was an era of specialisation and internationalisation. Norwegian medicine has to a large extent been internationalised through Nordic cooperation, with the Nordic specialist journals being of particular importance. With increasing professionalism in research, international English-language journals have become the major channels of communication, though several Norwegian-language journals (on paper or on the internet) have been established and are of crucial importance to a national identity within medical specialties. In 2005 there is only one general medical journal in Norwegian, in a country with approximately 20,000 doctors. A national identity related to medical publishing is not given much attention, though national medicine is still closely tied in with national culture. Good clinical practice should be based on a firm knowledge of local society and local tradition. This is a challenge in contemporary medical publishing.

  14. Humanists, Libraries, Electronic Publishing, and the Future.

    Science.gov (United States)

    Sweetland, James H.

    1992-01-01

    Discusses the impact of computerization on humanists and libraries. Highlights include a lack of relevant databases; a reliance on original text; vocabulary and language issues; lack of time pressure; research style; attitudes of humanists toward technology; trends in electronic publishing; hypertext; collection development; electronic mail;…

  15. Evolving Digital Publishing Opportunities across Composition Studies

    Science.gov (United States)

    Hawisher, Gail E.; Selfe, Cynthia L.

    2014-01-01

    In this article, the authors report that, since the early 1980s, the profession has seen plenty of changes in the arena of digital scholarly publishing: during this time, while the specific challenges have seldom remained the same, the presence and the pressures of rapid technological change endure. In fact, as an editorial team that has, in part,…

  16. Electronic Publishing in Library and Information Science.

    Science.gov (United States)

    Lee, Joel M.; And Others

    1988-01-01

    Discusses electronic publishing as it refers to machine-readable databases. Types of electronic products and services are described and related topics considered: (1) usage of library and information science databases; (2) production and distribution of databases; (3) trends and projections in the electronic information industry; and (4)…

  17. Publisher Correction: The price of fast fashion

    Science.gov (United States)

    2018-02-01

    In the version of this Editorial originally published, the rate of clothing disposal to landfill was incorrectly given as `one rubbish truck per day'; it should have read `one rubbish truck per second'. This has now been corrected in the online versions of the Editorial.

  18. Data Publishing - View from the Front

    Science.gov (United States)

    Carlson, David; Pfeiffenberger, Hans

    2014-05-01

    As data publishing journals - Earth System Science Data (ESSD, Copernicus, since 2009), Geophysical Data Journal (GDJ, Wiley, recent) and Scientific Data (SD, Nature Publishing Group, anticipated from May 2014) - expose data sets, implement data description and data review practices, and develop partnerships with data centres and data providers, we anticipate substantial benefits for the broad earth system and environmental research communities but also substantial challenges for all parties. A primary advantage emerges from open access to convergent data: subsurface hydrographic data near Antarctica, for example, now available for combination and comparison with nearby atmospheric data (both documented in ESSD), basin-scale precipitation data (accessed through GDJ) for comparison and interpolation with long-term global precipitation records (accessed from ESSD), or, imagining not too far into the future, stomach content and abundance data for European fish (from ESSD) linked to genetic or nutritional data (from SD). In addition to increased opportunity for discovery and collaboration, we also notice parallel developments of new tools for (published) data visualization and display and increasing acceptance of data publication as a useful and anticipated dissemination step included in project- and institution-based data management plans. All parties - providers, publishers and users - will benefit as various indexing services (SCI, SCOPUS, DCI etc.) acknowledge the creative, intellectual and meritorious efforts of data preparation and data provision. The challenges facing data publication, in most cases very familiar to the data community but made more acute by the advances in data publishing, include diverging metadata standards (among biomedical, green ocean modeling and meteorological communities, for example), adhering to standards and practices for permanent identification while also accommodating 'living' data, and maintaining prompt but rigorous review and

  19. Underestimation of Severity of Previous Whiplash Injuries

    Science.gov (United States)

    Naqui, SZH; Lovell, SJ; Lovell, ME

    2008-01-01

    INTRODUCTION We noted a report that more significant symptoms may be expressed after second whiplash injuries by a suggested cumulative effect, including degeneration. We wondered if patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS We studied recent medicolegal reports, to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same or lesser in severity. RESULTS From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 that felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66–84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501

  20. [Electronic cigarettes - effects on health. Previous reports].

    Science.gov (United States)

    Napierała, Marta; Kulza, Maksymilian; Wachowiak, Anna; Jabłecka, Katarzyna; Florek, Ewa

    2014-01-01

    Electronic cigarettes (e-cigarettes) have recently become very popular on the tobacco products market. These products are considered potentially less harmful compared to traditional tobacco products. However, current reports indicate that the producers' statements regarding the composition of the e-liquids are not always sufficient, and consumers often do not have reliable information on the quality of the product they use. This paper contains a review of previous reports on the composition of e-cigarettes and their impact on health. Most of the observed health effects were related to symptoms of the respiratory tract, mouth, throat, neurological complications and sensory organs. Particularly hazardous effects of e-cigarettes were: pneumonia, congestive heart failure, confusion, convulsions, hypotension, aspiration pneumonia, second-degree facial burns, blindness, chest pain and rapid heartbeat. In the literature there is no information on passive exposure to the aerosols released during e-cigarette smoking. Furthermore, information on the long-term use of these products is also unavailable.

  1. Experiences and lessons learned from creating a generalized workflow for data publication of field campaign datasets

    Science.gov (United States)

    Santhana Vannan, S. K.; Ramachandran, R.; Deb, D.; Beaty, T.; Wright, D.

    2017-12-01

    This paper summarizes the workflow challenges of curating and publishing data produced from disparate data sources and provides a generalized workflow solution to efficiently archive data generated by researchers. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics and the Global Hydrology Resource Center (GHRC) DAAC have been collaborating on the development of a generalized workflow solution to efficiently manage the data publication process. The generalized workflow presented here is built on lessons learned from implementations of the workflow system. Data publication consists of the following steps: accepting the data package from the data providers and ensuring the full integrity of the data files; identifying and addressing data quality issues; assembling standardized, detailed metadata and documentation, including file-level details, processing methodology, and characteristics of data files; setting up data access mechanisms; setting up the data in data tools and services for improved data dissemination and user experience; registering the dataset in online search and discovery catalogues; and preserving the data location through Digital Object Identifiers (DOIs). We will describe the steps taken to automate and realize efficiencies in the above process. The goals of the workflow system are to reduce the time taken to publish a dataset, to increase the quality of documentation and metadata, and to track individual datasets through the data curation process. Utilities developed to achieve these goals will be described. We will also share the metrics-driven value of the workflow system and discuss future steps towards the creation of a common software framework.
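    The curation steps in the abstract above lend themselves to a simple pipeline abstraction. A minimal sketch follows; the step names mirror the abstract, but the functions and data structures are illustrative assumptions, not the ORNL DAAC's actual tooling:

    ```python
    import hashlib

    # Hypothetical step list mirroring the abstract's publication workflow.
    PIPELINE_STEPS = [
        "accept_and_verify_integrity",
        "address_quality_issues",
        "assemble_metadata",
        "setup_data_access",
        "register_in_catalogues",
        "mint_doi",
    ]

    def verify_integrity(files):
        """First step: checksum every file on ingest so corruption is caught early."""
        return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

    def track(dataset_id, completed):
        """Track an individual dataset through the curation process."""
        return {
            "id": dataset_id,
            "done": list(completed),
            "next": [s for s in PIPELINE_STEPS if s not in completed],
        }

    checksums = verify_integrity({"site_a.csv": b"lat,lon,flux\n"})
    status = track("ds-001", completed=["accept_and_verify_integrity"])
    ```

    Recording which step each dataset has reached is what makes the workflow's stated goals measurable: time-to-publication per dataset, and where submissions stall.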

  2. Evaluating SPARQL queries on massive RDF datasets

    KAUST Repository

    Al-Harbi, Razen; Abdelaziz, Ibrahim; Kalnis, Panos; Mamoulis, Nikos

    2015-01-01

    In this paper, we propose AdHash, a distributed RDF system which addresses the shortcomings of previous work. First, AdHash initially applies lightweight hash partitioning, which drastically minimizes the startup cost, while favoring the parallel processing of join patterns on subjects, without any data communication. Using a locality-aware planner, queries that cannot be processed in parallel are evaluated with minimal communication. Second, AdHash monitors the data access patterns and adapts dynamically to the query load by incrementally redistributing and replicating frequently accessed data. As a result, the communication cost for future queries is drastically reduced or even eliminated. Our experiments with synthetic and real data verify that AdHash (i) starts faster than all existing systems, (ii) processes thousands of queries before other systems become online, and (iii) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in sub-seconds. In this demonstration, the audience can use a graphical interface of AdHash to verify its performance superiority compared to state-of-the-art distributed RDF systems.

  3. Consortium Negotiations with Publishers - Past and Future

    Directory of Open Access Journals (Sweden)

    Pierre Carbone

    2007-09-01

    Full Text Available Since the mid-nineties, with the development of online access to information (journals, databases, e-books), libraries strengthened their cooperation. They set up consortia at different levels around the world, generally with the support of the public authorities, for negotiating collectively with publishers and information providers general agreements for access to these resources. This cooperation has been reinforced at the international level with the exchange of experiences and the debates in the ICOLC seminars and statements. So did the French consortium Couperin, which now gathers more than 200 academic and research institutions. The level of access to and downloading from these resources is growing in geometric progression, and reaches a scale beyond comparison with ILL or access to printed documents, but the costs did not fall and library budgets did not increase. At first, agreements with the major journal publishers were based on cross-access, and evolved rapidly to access to a large bundle of titles in the so-called Big Deal. After experiencing the advantages of the Big Deal, libraries are now more sensitive to its limits, lack of flexibility and cost-effectiveness. These Big Deals were based on a model where the online access fee is built on the cost of print subscriptions, and the problem for the consortia and for the publishers is now to evolve from this print-plus-online model to an e-only model, no longer based on the historical amount of the print subscriptions: a new deal. In many European countries, VAT legislation is an obstacle to e-only, and this problem must be discussed at the European level. This change to e-only takes place at a moment when changes in the scientific publishing world are important (mergers of publishing houses, growth of research and of scientific publishing in the developing countries, the open access and open archives movement). The transition to e-only leads also the library

  4. Publishing Landscape Archaeology in the Digital World

    Directory of Open Access Journals (Sweden)

    Howry Jeffrey C.

    2017-12-01

    Full Text Available The challenge of presenting micro- and macro-scale data in landscape archaeology studies is facilitated by a diversity of GIS technologies. Specific to scholarly research is the need to selectively share certain types of data with collaborators and academic researchers while also publishing general information in the public domain. This article presents a general model for scholarly online collaboration and teaching while providing examples of the kinds of landscape archaeology that can be published online. Specifically illustrated is WorldMap, an interactive mapping platform based upon open-source software which uses browsers built to open standards. The various features of this platform allow tight user viewing control, views with URL referencing, commenting and certification of layers, as well as user annotation. Illustration of WorldMap features and its value for scholarly research and teaching is provided in the context of landscape archaeology studies.

  5. Springer Publishing Booth | 4-5 October

    CERN Multimedia

    2016-01-01

    In the spirit of continuation of the CERN Book Fairs of the past years, Springer Nature will be present with a book and journal booth on October 4th and 5th, located as usual in the foyer of the Main Building. Some of the latest titles in particle physics and related fields will be on sale.   You are cordially invited to come to the booth to meet Heike Klingebiel (Licensing Manager / Library Sales), Hisako Niko (Publishing Editor) and Christian Caron (Publishing Editor). In particular, information about the new Nano database – nanomaterial and device profiles from high-impact journals and patents, manually abstracted, curated and updated by nanotechnology experts – will be available. The database is accessible here: http://nano.nature.com/. 

  6. Publishing priorities of biomedical research funders

    Science.gov (United States)

    Collins, Ellen

    2013-01-01

    Objectives To understand the publishing priorities, especially in relation to open access, of 10 UK biomedical research funders. Design Semistructured interviews. Setting 10 UK biomedical research funders. Participants 12 employees with responsibility for research management at 10 UK biomedical research funders; a purposive sample to represent a range of backgrounds and organisation types. Conclusions Publicly funded and large biomedical research funders are committed to open access publishing and are pleased with recent developments which have stimulated growth in this area. Smaller charitable funders are supportive of the aims of open access, but are concerned about the practical implications for their budgets and their funded researchers. Across the board, biomedical research funders are turning their attention to other priorities for sharing research outputs, including data, protocols and negative results. Further work is required to understand how smaller funders, including charitable funders, can support open access. PMID:24154520

  7. Viability of Controlling Prosthetic Hand Utilizing Electroencephalograph (EEG) Dataset Signal

    Science.gov (United States)

    Miskon, Azizi; A/L Thanakodi, Suresh; Raihan Mazlan, Mohd; Mohd Haziq Azhar, Satria; Nooraya Mohd Tawil, Siti

    2016-11-01

    This project presents the development of an artificial hand controlled by electroencephalograph (EEG) signal datasets for prosthetic applications. The EEG signal datasets were used to improve the way the prosthetic hand is controlled, compared to electromyography (EMG). EMG has disadvantages for a person who has not used the muscle for a long time, and also for persons with degenerative issues due to age. Thus, the EEG datasets were found to be an alternative to EMG. The datasets used in this work were taken from a Brain Computer Interface (BCI) project. The datasets were already classified for open, close and combined movement operations. They served as input to control the prosthetic hand through an interface between Microsoft Visual Studio and Arduino. The obtained results reveal the prosthetic hand to be more efficient and faster in response to the EEG datasets, with an additional LiPo (lithium polymer) battery attached to the prosthetic. Some limitations were also identified in terms of the hand movements and the weight of the prosthetic, and suggestions for improvement are given in this paper. Overall, the objective of this paper was achieved, as the prosthetic hand was found to be feasible in operation utilizing the EEG datasets.
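    The control path described above (classified EEG label → interface program → Arduino) can be sketched as a label-to-command mapping. The label set (open, close, combined) comes from the abstract; the servo angles and the frame format are assumptions for illustration, not the authors' actual protocol:

    ```python
    # Hypothetical servo angles per classified EEG command; in the described
    # setup, an interface program would send such frames to the Arduino over
    # serial, and the Arduino sketch would drive the hand's servos.
    COMMANDS = {
        "open":     {"thumb": 0,  "fingers": 0},   # hand fully extended
        "close":    {"thumb": 90, "fingers": 90},  # hand fully flexed
        "combined": {"thumb": 90, "fingers": 0},   # mixed movement, e.g. a pinch
    }

    def to_servo_frame(label: str) -> str:
        """Translate one classified EEG label into a newline-terminated,
        comma-separated command frame that a microcontroller could parse."""
        angles = COMMANDS[label]
        return f"{angles['thumb']},{angles['fingers']}\n"
    ```

    Keeping the classifier output as a small discrete label set, rather than streaming raw EEG, is what makes a low-bandwidth serial link to the microcontroller sufficient.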

  8. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    Science.gov (United States)

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

    SUMMARY In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. In most existing integrative analyses, the homogeneity model has been assumed, which postulates that different datasets share the same set of markers. Several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects. Such differences may make the homogeneity model too restrictive. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival. This model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. A simulation study shows that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of the heterogeneity model and the proposed approach. PMID:23938111
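    For reference, the MCP building block of the penalty has the standard minimax-concave form, reproduced here from the general penalized-regression literature (the exact sparse group composition used in the article may differ):

    ```latex
    % Minimax concave penalty (MCP) for a single coefficient t,
    % with tuning parameter \lambda and concavity parameter \gamma > 1:
    \rho(t; \lambda, \gamma) =
    \begin{cases}
      \lambda \lvert t \rvert - \dfrac{t^{2}}{2\gamma}, & \lvert t \rvert \le \gamma\lambda, \\[6pt]
      \dfrac{\gamma \lambda^{2}}{2},                    & \lvert t \rvert > \gamma\lambda.
    \end{cases}
    ```

    Unlike the LASSO penalty, this one flattens out beyond $\gamma\lambda$, so large coefficients are not over-shrunk; in a sparse group construction, an outer penalty over each marker's group of per-dataset coefficients combined with an inner per-coefficient penalty is what allows a marker to be selected in some datasets but not others, matching the heterogeneity model.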

  9. Evaluating SPARQL queries on massive RDF datasets

    KAUST Repository

    Al-Harbi, Razen

    2015-08-01

    Distributed RDF systems partition data across multiple computer nodes. Partitioning is typically based on heuristics that minimize inter-node communication and it is performed in an initial, data pre-processing phase. Therefore, the resulting partitions are static and do not adapt to changes in the query workload; as a result, existing systems are unable to consistently avoid communication for queries that are not favored by the initial data partitioning. Furthermore, for very large RDF knowledge bases, the partitioning phase becomes prohibitively expensive, leading to high startup costs. In this paper, we propose AdHash, a distributed RDF system which addresses the shortcomings of previous work. First, AdHash initially applies lightweight hash partitioning, which drastically minimizes the startup cost, while favoring the parallel processing of join patterns on subjects, without any data communication. Using a locality-aware planner, queries that cannot be processed in parallel are evaluated with minimal communication. Second, AdHash monitors the data access patterns and adapts dynamically to the query load by incrementally redistributing and replicating frequently accessed data. As a result, the communication cost for future queries is drastically reduced or even eliminated. Our experiments with synthetic and real data verify that AdHash (i) starts faster than all existing systems, (ii) processes thousands of queries before other systems become online, and (iii) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in sub-seconds. In this demonstration, the audience can use a graphical interface of AdHash to verify its performance superiority compared to state-of-the-art distributed RDF systems.
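    The "lightweight hash partitioning ... of join patterns on subjects" described above can be illustrated in a few lines (an illustrative sketch, not AdHash's implementation): each triple is assigned to a worker by hashing its subject, so all triples about one subject are co-located and subject-star joins need no inter-node communication.

    ```python
    from collections import defaultdict

    def partition_by_subject(triples, num_workers):
        """Place each (subject, predicate, object) triple on the worker chosen
        by hashing its subject. All triples sharing a subject land on the same
        partition, so star join patterns on the subject are answered locally."""
        partitions = defaultdict(list)
        for s, p, o in triples:
            partitions[hash(s) % num_workers].append((s, p, o))
        return partitions

    triples = [
        ("alice", "knows", "bob"),
        ("alice", "worksAt", "kaust"),
        ("bob", "knows", "carol"),
    ]
    parts = partition_by_subject(triples, num_workers=4)
    ```

    A query such as `?x knows ?y . ?x worksAt ?z` joins on the subject `?x`, so each worker can evaluate it on its own partition; queries joining subjects to objects may still cross partitions, which is what AdHash's adaptive redistribution of frequently accessed data targets.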

  10. The Open Data Repository's Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.

    2015-01-01

    Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.
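    The field-level permission model described above can be illustrated abstractly. All names and levels in this sketch are our assumptions for illustration, not Data Publisher's actual data model or API.

    ```python
    # Hypothetical sketch: every field carries a minimum permission level,
    # and a viewer sees only the fields at or below their own level.
    PUBLIC, REGISTERED, OWNER = 0, 1, 2

    def visible_fields(record, viewer_level):
        """Filter a record down to the fields this viewer may see."""
        return {
            name: value
            for name, (value, min_level) in record.items()
            if viewer_level >= min_level
        }

    sample = {
        "mineral_name": ("quartz", PUBLIC),
        "locality_notes": ("unpublished site", OWNER),
    }
    ```

    A public visitor would see only `mineral_name`, while the record owner sees every field; publication control at the field level amounts to attaching one such threshold per field.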

  11. Publisher Correction: Local sourcing in astronomy

    Science.gov (United States)

    2018-06-01

    In the version of this Editorial originally published, we mistakenly wrote that `the NAOJ ... may decommission Subaru in favour of other priorities'. In fact, the National Astronomical Observatory of Japan is committed to the long-term operation of the Subaru telescope. In the corrected version that whole sentence has been replaced with: `It will be critical to maintain such smaller telescopes in the age of the ELTs.'

  12. Electronic pre-publishing for worldwide access

    International Nuclear Information System (INIS)

    Dallman, D.; Draper, M.; Schwarz, S.

    1994-01-01

    In High Energy Physics, as in other areas of research, paper preprints have traditionally been the primary method of communication, before publishing in a journal. Electronic bulletin boards (EBBs) are now taking over as the dominant medium. While fast and readily available, EBBs do not constitute electronic journals, as they bypass the referee system crucial to prestigious research journals, although refereeing too may be achieved electronically in time. (UK)

  13. The Open Data Repository's Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.

    2015-12-01

    Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A.

  14. Promising Products for Printing and Publishing Market

    Directory of Open Access Journals (Sweden)

    Renata Činčikaitė

    2011-04-01

    Full Text Available The article surveys the printing and publishing market and its strengths and weaknesses. The concept of a new product is described, as well as its life cycle and the necessity of its introduction to the market. The enterprise X operating on the market is analyzed and its strengths and weaknesses are presented. A segmentation of the company's consumers is performed. On the basis of this analysis, the company's potentially promising products are identified. Article in Lithuanian

  15. Redressing the inverted pyramid of scientific publishing

    Science.gov (United States)

    Caux, Jean-Sébastien

    2017-11-01

    Scientific publishing is currently undergoing a progressively rapid transformation away from the traditional subscription model. With the Open Access movement in full swing, existing business practices and future plans are coming under increasing scrutiny, while new "big deals" are being made at breakneck speed. Scientists can rightfully ask themselves if all these changes are going the right way, and if not, what can be done about it.

  16. Open Access publishing in physics gains momentum

    CERN Multimedia

    2006-01-01

    "As if inventing the World-Wide Web were not revolutionary enough, the European Organisation for Nuclear Research (CERN) is now on its way to unleashing a paradigm shift in the world of academic publishing. For the first time ever, an entire scientific field is exploring the possibility of converting its reader-paid journals into an author-pai Open Access format." (1 page)

  17. The largest human cognitive performance dataset reveals insights into the effects of lifestyle factors and aging

    Directory of Open Access Journals (Sweden)

    Daniel A Sternberg

    2013-06-01

    Full Text Available Making new breakthroughs in understanding the processes underlying human cognition may depend on the availability of very large datasets that have not historically existed in psychology and neuroscience. Lumosity is a web-based cognitive training platform that has grown to include over 600 million cognitive training task results from over 35 million individuals, comprising the largest existing dataset of human cognitive performance. As part of the Human Cognition Project, Lumosity’s collaborative research program to understand the human mind, Lumos Labs researchers and external research collaborators have begun to explore this dataset in order to uncover novel insights about the correlates of cognitive performance. This paper presents two preliminary demonstrations of some of the kinds of questions that can be examined with the dataset. The first example focuses on replicating known findings relating lifestyle factors to baseline cognitive performance in a demographically diverse, healthy population at a much larger scale than has previously been available. The second example examines a question that would likely be very difficult to study in laboratory-based and existing online experimental research approaches: specifically, how learning ability for different types of cognitive tasks changes with age. We hope that these examples will provoke the imagination of researchers who are interested in collaborating to answer fundamental questions about human cognitive performance.

  18. The BiPublishers ranking: Main results and methodological problems when constructing rankings of academic publishers

    Directory of Open Access Journals (Sweden)

    Torres-Salinas, Daniel

    2015-12-01

    Full Text Available We present the results of the Bibliometric Indicators for Publishers project (also known as BiPublishers). This project represents the first attempt to systematically develop bibliometric publisher rankings. The data for this project was derived from the Book Citation Index and the study time period was 2009-2013. We have developed 42 rankings: 4 by fields and 38 by disciplines. We display six indicators for publishers divided into three types: output, impact and publisher’s profile. The aim is to capture different characteristics of the research performance of publishers. 254 publishers were processed and classified according to publisher type: commercial publishers and university presses. We present the main publishers by field and then discuss the principal challenges presented when developing this type of tool. The BiPublishers ranking is an on-going project which aims to develop and explore new data sources and indicators to better capture and define the research impact of publishers.

  19. panMetaDocs and DataSync - providing a convenient way to share and publish research data

    Science.gov (United States)

    Ulbricht, D.; Klump, J. F.

    2013-12-01

    In recent years, research institutions, geological surveys and funding organizations have started to build infrastructures to facilitate the re-use of research data from previous work. At present, several intermeshed activities are coordinated to make data systems of the earth sciences interoperable and recorded data discoverable. Driven by governmental authorities, ISO19115/19139 emerged as metadata standards for the discovery of data and services. Established metadata transport protocols like OAI-PMH and OGC-CSW are used to disseminate metadata to data portals. With persistent identifiers like DOI and IGSN, research data and corresponding physical samples can be given unambiguous names and thus become citable. In summary, these activities focus primarily on 'ready to give away' data, already stored in an institutional repository and described with appropriate metadata. Many datasets are not 'born' in this state but are produced in small and federated research projects. To make access and reuse of these 'small data' easier, these data should be centrally stored and version controlled from the very beginning of activities. We developed DataSync [1] as a supplemental application to the panMetaDocs [2] data exchange platform, serving as a data management tool for small science projects. DataSync is a Java application that runs on a local computer and synchronizes directory trees into an eSciDoc repository [3] by creating eSciDoc objects via eSciDoc's REST API. DataSync can be installed on multiple computers and can thus synchronize a research team's files over the internet. XML metadata can be added as separate files that are managed together with the data files as versioned eSciDoc objects. A project-customized instance of panMetaDocs is provided to show a web-based overview of the previously uploaded file collection and to allow further annotation with metadata inside the eSciDoc repository.
PanMetaDocs is a PHP-based web application to assist the creation of metadata in
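    The core of a DataSync-style one-way synchronization, detecting which files in a directory tree changed since the last upload, can be sketched as follows. This is our own illustration; the actual client creates versioned objects through eSciDoc's REST API, a step omitted here.

    ```python
    import hashlib
    import os

    def changed_files(root, known_checksums):
        """Walk a directory tree and return {relative_path: sha1} for every
        file whose checksum differs from the previously synchronized state.
        A real client would then create or update one repository object per
        changed file."""
        changed = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as fh:
                    digest = hashlib.sha1(fh.read()).hexdigest()
                rel = os.path.relpath(path, root)
                if known_checksums.get(rel) != digest:
                    changed[rel] = digest
        return changed
    ```

    Running the function a second time with the returned state yields an empty dict, which is what lets the sync be repeated cheaply from multiple computers.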

  20. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Herr, J.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as the medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. The WLAP model is spreading. This summer, CERN's High School Teachers program used WLAP's system to record several physics lectures directed toward a broad audience. And a new project called MScribe, which is essentially the WLAP system coupled with an infrared tracking camera, is being used by the University of Michigan to record several university courses this academic year. All lectures can be viewed on any major platform with any common internet browser...