WorldWideScience

Sample records for monthly survey compiles

  1. METHODOLOGICAL PROPOSAL FOR COMPILING THE ILO UNEMPLOYMENT WITH MONTHLY PERIODICITY

    Directory of Open Access Journals (Sweden)

    Silvia PISICĂ

    2011-08-01

    Developing a methodology for deriving monthly unemployment statistics directly from quarterly Labour Force Survey (LFS) results by econometric modelling meets the requirement of providing the short-term information needed for employment policies aimed at achieving the objectives of Europe 2020. Monthly data series estimated according to this methodology allow assessment of short-term trends in unemployment measured according to the criteria of the International Labour Organisation (ILO), in terms of comparability with European statistics.

  2. A survey of compiler optimization techniques

    Science.gov (United States)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
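
    As an illustration of the third category, the sketch below performs a classic architecture-independent optimization, constant folding, on a small expression tree at the source level; it is a generic Python illustration, not code from the paper.

```python
# Constant folding on a Python AST: fold binary operations whose operands are
# numeric literals, independently of any target machine (illustrative sketch).
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.FloorDiv: operator.floordiv}

class ConstantFolder(ast.NodeTransformer):
    """Replace BinOp nodes over two literal constants with a single constant."""
    def visit_BinOp(self, node):
        self.generic_visit(node)                          # fold children first
        fold = OPS.get(type(node.op))
        if (fold and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.copy_location(
                ast.Constant(fold(node.left.value, node.right.value)), node)
        return node

tree = ConstantFolder().visit(ast.parse("y = x * (2 + 3 * 4)"))
print(ast.unparse(ast.fix_missing_locations(tree)))       # y = x * 14
```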

  3. Licensee Event Report (LER) compilation for month of March 1988

    Energy Technology Data Exchange (ETDEWEB)

    None

    1988-04-01

    This monthly report contains Licensee Event Report (LER) operational information that was processed into the LER data file of the Nuclear Safety Information Center (NSIC) during the one-month period identified on the cover of the document. The LERS, from which this information is derived, are submitted to the Nuclear Regulatory Commission (NRC) by nuclear power plant licensees in accordance with federal regulations. Procedures for LER reporting for revisions to those events occurring prior to 1984 are described in NRC Regulatory Guide 1.16 and NUREG-1061, Instructions for preparation of data entry sheets for licensee event reports. For those events occurring on and after January 1, 1984, LERs are being submitted in accordance with the revised rule contained in Title 10 Part 50.73 of the Code of Federal Regulations (10 CFR 50.73 - Licensee Event Report System) which was published in the Federal Register (Vol. 48, No. 144) on July 26, 1983. NUREG-1022, Licensee Event Report System - Description of systems and guidelines for reporting, provides supporting guidance and information on the revised LER rule.

  4. A survey of compiler development aids. [concerning lexical, syntax, and semantic analysis

    Science.gov (United States)

    Buckles, B. P.; Hodges, B. C.; Hsia, P.

    1977-01-01

    A theoretical background was established for the compilation process by dividing it into five phases and explaining the concepts and algorithms that underpin each. The five selected phases were lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. Graph-theoretical optimization techniques were presented, and approaches to code generation were described for both one-pass and multipass compilation environments. Following the initial tutorial sections, more than 20 tools that were developed to aid in the process of writing compilers were surveyed. Eight of the more recent compiler development aids were selected for special attention: SIMCMP/STAGE2, LANG-PAK, COGENT, XPL, AED, CWIC, LIS, and JOCIT. The impact of compiler development aids was assessed, some of their shortcomings were noted, and some of the areas of research currently in progress were inspected.
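
    A minimal sketch of the first of the five phases, lexical analysis, is shown below: a regular-expression tokenizer for a toy language. It is illustrative only; the surveyed tools generate far more complete scanners from declarative specifications.

```python
# A toy scanner: classify the input into NUMBER, IDENT and OP tokens,
# discarding whitespace (illustrative sketch, not from the surveyed tools).
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs for each non-whitespace lexeme."""
    for match in TOKEN_RE.finditer(source):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("area = width * 12")))
# [('IDENT', 'area'), ('OP', '='), ('IDENT', 'width'), ('OP', '*'), ('NUMBER', '12')]
```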

  5. 12 CFR 906.5 - Monthly interest rate survey.

    Science.gov (United States)

    2010-01-01

    Title 12 (Banks and Banking), § 906.5, Monthly Interest Rate Survey (MIRS): The Finance Board conducts its Monthly Survey of Rates and Terms on Conventional One-Family Non-farm Mortgage...

  6. Licensee Event Report (LER) compilation for month of February 1984. Vol. 3, No. 2

    Energy Technology Data Exchange (ETDEWEB)

    1984-03-01

    This monthly report contains LER operational information that was processed into the LER data file of the Nuclear Safety Information Center (NSIC) during this period. The LER summaries are arranged alphabetically by facility name and then chronologically by event date for each facility. Component, system, keyword, and component vendor indexes follow the summaries.

  7. A Fresh Look at Flooring Costs. A Report on a Survey of User Experience Compiled by Armstrong Cork Company.

    Science.gov (United States)

    Armstrong Cork Co., Lancaster, PA.

    Survey information based on actual flooring installations in several types of buildings and traffic conditions, representing nearly 113 million square feet of actual user experience, is contained in this comprehensive report compiled by the Armstrong Cork Company. The comparative figures provided by these users clearly establish that--(1) the…

  8. This Month in Astronomical History: Preliminary Survey Results

    Science.gov (United States)

    Wilson, Teresa

    2017-01-01

    This Month in Astronomical History is a short (~500 word) column on the AAS website that revisits significant astronomical events or the lives of people who have made a large impact on the field. The monthly column began in July 2016 at the request of the Historical Astronomical Division. Examples of topics that have been covered include Comet Shoemaker-Levy 9’s collision with Jupiter, the discovery of the moons of Mars, the life of Edwin Hubble, Maria Mitchell’s comet discovery, and the launch of Sputnik II. A survey concerning the column is in progress to ensure the column addresses the interests and needs of a broad readership, including historians, educators, research astronomers, and the general public. Eleven questions focus on the style and content of the column, while eight collect simple demographics. The survey has been available on the AAS website and was mentioned in several AAS newsletters; however, non-members of the AAS were also recruited to include respondents from a variety of backgrounds. Preliminary results of the survey are presented and will be used to hone the style and content of the column to serve the widest possible audience. Responses continue to be collected at: https://goo.gl/forms/Lhwl2aWJl2Vkoo7v1

  9. Bedrock Outcrop Points Compilation

    Data.gov (United States)

    Vermont Center for Geographic Information — A compilation of bedrock outcrops as points and/or polygons from 1:62,500 and 1:24,000 geologic mapping by the Vermont Geological Survey, the United States...

  10. Compiling Dictionaries

    African Journals Online (AJOL)

    Information Technology

    quiring efficient techniques. The text corpus .... make the process of compiling a dictionary simpler and more efficient. If we are ever ... need a mass production technique. ..... Mapping semantic relationships in the lexicon using lexical functions.

  11. 77 FR 19610 - Proposed Information Collection; Comment Request; Advance Monthly Retail Trade Survey

    Science.gov (United States)

    2012-04-02

    ... Retail Trade Survey AGENCY: U.S. Census Bureau, Department of Commerce. ACTION: Notice. SUMMARY: The... INFORMATION: I. Abstract The Advance Monthly Retail Trade Survey (MARTS) provides an early indication of monthly sales for firms located in the United States and classified in the Retail Trade or Food...

  12. Nuclear fuel cycle risk assessment: survey and computer compilation of risk-related literature. [Once-through Cycle and Plutonium Recycle

    Energy Technology Data Exchange (ETDEWEB)

    Yates, K.R.; Schreiber, A.M.; Rudolph, A.W.

    1982-10-01

    The US Nuclear Regulatory Commission has initiated the Fuel Cycle Risk Assessment Program to provide risk assessment methods for assistance in the regulatory process for nuclear fuel cycle facilities other than reactors. Both the once-through cycle and plutonium recycle are being considered. A previous report generated by this program defines and describes fuel cycle facilities, or elements, considered in the program. This report, the second from the program, describes the survey and computer compilation of fuel cycle risk-related literature. Sources of available information on the design, safety, and risk associated with the defined set of fuel cycle elements were searched and documents obtained were catalogued and characterized with respect to fuel cycle elements and specific risk/safety information. Both US and foreign surveys were conducted. Battelle's computer-based BASIS information management system was used to facilitate the establishment of the literature compilation. A complete listing of the literature compilation and several useful indexes are included. Future updates of the literature compilation will be published periodically. 760 annotated citations are included.

  13. An X-ray survey of clusters of galaxies. IV - A survey of southern clusters and a compilation of upper limits for both Abell and southern clusters

    Science.gov (United States)

    Kowalski, M. P.; Ulmer, M. P.; Cruddace, R. G.; Wood, K. S.

    1984-12-01

    The results of the HEAO 1 A-1 X-ray survey of galaxy clusters are reported. X-ray error boxes and intensities are presented for all clusters in the Abell catalog and for the catalog of southern clusters and groups compiled by Duus and Newell (1977). A correlation is derived on the basis of the 2-6 keV X-ray luminosity function which may be used to calculate the contribution of clusters to the diffuse X-ray background at different energies; the cluster contribution is estimated to be 9.3 percent (+1.9 or -1.5 percent). Correlations between X-ray luminosity and other cluster properties are examined, and it is found that the distribution of upper limits may be applied to obtaining a more precise estimate of the average X-ray luminosity of clusters. The Abell richness class and southern cluster concentrations were strongly correlated with X-ray luminosity. Correlations between X-ray luminosity and optical radius, velocity dispersion, spiral fraction, and radio power are analyzed; the evidence for all these correlations was considered weak because of scatter in the data.

  14. Why do airlines want and use thrust reversers? A compilation of airline industry responses to a survey regarding the use of thrust reversers on commercial transport airplanes

    Science.gov (United States)

    Yetter, Jeffrey A.

    1995-01-01

    Although thrust reversers are used for only a fraction of the airplane operating time, their impact on nacelle design, weight, airplane cruise performance, and overall airplane operating and maintenance expenses is significant. Why then do the airlines want and use thrust reversers? In an effort to understand the airlines' need for thrust reversers, a survey of the airline industry was made to determine why and under what situations thrust reversers are currently used or thought to be needed. The survey was intended to help establish the cost/benefit trades for the use of thrust reversers and airline opinion regarding alternative deceleration devices. A compilation and summary of the responses given to the survey questionnaire is presented.

  15. Carbapenemase-Producing Klebsiella pneumoniae in Romania : A Six-Month Survey

    NARCIS (Netherlands)

    Lixandru, Brandusa Elena; Cotar, Ani Ioana; Straut, Monica; Usein, Codruta Romanita; Cristea, Dana; Ciontea, Simona; Tatu-Chitoiu, Dorina; Codita, Irina; Rafila, Alexandru; Nica, Maria; Buzea, Mariana; Baicus, Anda; Ghita, Mihaela Camelia; Nistor, Irina; Tuchilus, Cristina; Indreas, Marina; Antohe, Felicia; Glasner, Corinna; Grundmann, Hajo; Jasir, Aftab; Damian, Maria

    2015-01-01

    This study presents the first characterization of carbapenem-non-susceptible Klebsiella pneumoniae isolates by means of a structured six-month survey performed in Romania as part of an Europe-wide investigation. Klebsiella pneumoniae clinical isolates from different anatomical sites were tested for

  16. The Compiler Forest

    OpenAIRE

    Budiu, Mihai; Galenson, Joel; Plotkin, Gordon D.

    2013-01-01

    We address the problem of writing compilers targeting complex execution environments, such as computer clusters composed of machines with multi-core CPUs. To that end we introduce partial compilers. These compilers can pass sub-programs to several child (partial) compilers, combining the code generated by their children to generate the final target code. We define a set of high-level polymorphic operations manipulating both compilers and partial compilers as first-class values. These mechanis...
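
    A rough sketch of the idea of partial compilers as first-class, composable values is given below; the concrete representation (a split function plus a merge function) is an assumption for illustration, not the paper's formalism.

```python
# Partial compilers as values: a partial compiler splits a program into
# sub-programs, hands them to child compilers, and reassembles their outputs.
# (Illustrative sketch; names and representation are assumptions.)
from dataclasses import dataclass
from typing import Callable, List

Compiler = Callable[[str], str]          # a total compiler: source -> target

@dataclass
class PartialCompiler:
    split: Callable[[str], List[str]]    # source -> sub-programs for children
    merge: Callable[[List[str]], str]    # children's outputs -> final target

def compose(parent: PartialCompiler, children: List[Compiler]) -> Compiler:
    """Combine a partial compiler with child compilers into an ordinary compiler."""
    def compiled(source: str) -> str:
        parts = parent.split(source)
        outputs = [child(part) for child, part in zip(children, parts)]
        return parent.merge(outputs)
    return compiled

# Toy usage: a "cluster" compiler splits a program into per-core chunks,
# compiles each with a per-core compiler, and wraps them in a dispatch call.
per_core: Compiler = lambda src: f"core_code({src!r})"
cluster = PartialCompiler(
    split=lambda src: src.split(";"),
    merge=lambda outs: "dispatch([" + ", ".join(outs) + "])",
)
print(compose(cluster, [per_core] * 3)("a;b;c"))
```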

  17. The U.S. Geological Survey Monthly Water Balance Model Futures Portal

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; Markstrom, Steven L.; Emmerich, Christopher; Talbert, Marian

    2017-05-03

    The U.S. Geological Survey Monthly Water Balance Model Futures Portal (https://my.usgs.gov/mows/) is a user-friendly interface that summarizes monthly historical and simulated future conditions for seven hydrologic and meteorological variables (actual evapotranspiration, potential evapotranspiration, precipitation, runoff, snow water equivalent, atmospheric temperature, and streamflow) at locations across the conterminous United States (CONUS).The estimates of these hydrologic and meteorological variables were derived using a Monthly Water Balance Model (MWBM), a modular system that simulates monthly estimates of components of the hydrologic cycle using monthly precipitation and atmospheric temperature inputs. Precipitation and atmospheric temperature from 222 climate datasets spanning historical conditions (1952 through 2005) and simulated future conditions (2020 through 2099) were summarized for hydrographic features and used to drive the MWBM for the CONUS. The MWBM input and output variables were organized into an open-access database. An Open Geospatial Consortium, Inc., Web Feature Service allows the querying and identification of hydrographic features across the CONUS. To connect the Web Feature Service to the open-access database, a user interface—the Monthly Water Balance Model Futures Portal—was developed to allow the dynamic generation of summary files and plots  based on plot type, geographic location, specific climate datasets, period of record, MWBM variable, and other options. Both the plots and the data files are made available to the user for download 
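
    The sketch below is a minimal monthly water-balance bookkeeping routine in the spirit of a modular MWBM, not the USGS model itself; the snow and soil parameters and the crude PET placeholder are illustrative assumptions.

```python
# Minimal monthly water balance: partition precipitation into snow or rain by
# temperature, melt snow, fill a soil-moisture store, spill the excess as
# runoff. All parameter values are illustrative assumptions.
def monthly_water_balance(precip_mm, temp_c, soil_capacity_mm=150.0,
                          snow_temp_c=0.0, melt_rate_mm_per_degc=4.0):
    soil, snowpack = soil_capacity_mm / 2.0, 0.0        # assumed initial storage
    runoff = []
    for p, t in zip(precip_mm, temp_c):
        if t <= snow_temp_c:                            # snow accumulates
            snowpack += p
            water_in = 0.0
        else:                                           # rain plus snowmelt
            melt = min(snowpack, melt_rate_mm_per_degc * (t - snow_temp_c))
            snowpack -= melt
            water_in = p + melt
        pet = max(0.0, 16.0 * (t / 10.0))               # crude PET placeholder (mm)
        soil = soil + water_in - min(pet, soil + water_in)
        spill = max(0.0, soil - soil_capacity_mm)       # excess becomes runoff
        soil -= spill
        runoff.append(spill)
    return runoff

# Example: a cold wet month followed by a warm month releases melt-driven runoff.
print(monthly_water_balance([120, 80], [-5, 12]))
```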

  18. CUMULATIVE TRAUMAS AND RISK THRESHOLDS: 12-MONTH PTSD IN THE WORLD MENTAL HEALTH (WMH) SURVEYS

    Science.gov (United States)

    Karam, Elie G.; Friedman, Matthew J.; Hill, Eric D.; Kessler, Ronald C.; McLaughlin, Katie A.; Petukhova, Maria; Sampson, Laura; Shahly, Victoria; Angermeyer, Matthias C.; Bromet, Evelyn J.; de Girolamo, Giovanni; de Graaf, Ron; Demyttenaere, Koen; Ferry, Finola; Florescu, Silvia E.; Haro, Josep Maria; He, Yanling; Karam, Aimee N.; Kawakami, Norito; Kovess-Masfety, Viviane; Medina-Mora, María Elena; Browne, Mark A. Oakley; Posada-Villa, José A.; Shalev, Arieh Y.; Stein, Dan J.; Viana, Maria Carmen; Zarkov, Zahari; Koenen, Karestan C.

    2014-01-01

    Background Clinical research suggests that posttraumatic stress disorder (PTSD) patients exposed to multiple traumatic events (TEs) rather than a single TE have increased morbidity and dysfunction. Although epidemiological surveys in the United States and Europe also document high rates of multiple TE exposure, no population-based cross-national data have examined this issue. Methods Data were analyzed from 20 population surveys in the World Health Organization World Mental Health Survey Initiative (n 51,295 aged 18+). The Composite International Diagnostic Interview (3.0) assessed 12-month PTSD and other common DSM-IV disorders. Respondents with 12-month PTSD were assessed for single versus multiple TEs implicated in their symptoms. Associations were examined with age of onset (AOO), functional impairment, comorbidity, and PTSD symptom counts. Results 19.8% of respondents with 12-month PTSD reported that their symptoms were associated with multiple TEs. Cases who associated their PTSD with four or more TEs had greater functional impairment, an earlier AOO, longer duration, higher comorbidity with mood and anxiety disorders, elevated hyper-arousal symptoms, higher proportional exposures to partner physical abuse and other types of physical assault, and lower proportional exposure to unexpected death of a loved one than cases with fewer associated TEs. Conclusions A risk threshold was observed in this large-scale cross-national database wherein cases who associated their PTSD with four or more TEs presented a more “complex” clinical picture with substantially greater functional impairment and greater morbidity than other cases of PTSD. PTSD cases associated with four or more TEs may merit specific and targeted intervention strategies. Depression and Anxiety 31:130–142, 2014. PMID:23983056

  19. Cumulative traumas and risk thresholds: 12-month PTSD in the World Mental Health (WMH) surveys.

    Science.gov (United States)

    Karam, Elie G; Friedman, Matthew J; Hill, Eric D; Kessler, Ronald C; McLaughlin, Katie A; Petukhova, Maria; Sampson, Laura; Shahly, Victoria; Angermeyer, Matthias C; Bromet, Evelyn J; de Girolamo, Giovanni; de Graaf, Ron; Demyttenaere, Koen; Ferry, Finola; Florescu, Silvia E; Haro, Josep Maria; He, Yanling; Karam, Aimee N; Kawakami, Norito; Kovess-Masfety, Viviane; Medina-Mora, María Elena; Browne, Mark A Oakley; Posada-Villa, José A; Shalev, Arieh Y; Stein, Dan J; Viana, Maria Carmen; Zarkov, Zahari; Koenen, Karestan C

    2014-02-01

    Clinical research suggests that posttraumatic stress disorder (PTSD) patients exposed to multiple traumatic events (TEs) rather than a single TE have increased morbidity and dysfunction. Although epidemiological surveys in the United States and Europe also document high rates of multiple TE exposure, no population-based cross-national data have examined this issue. Data were analyzed from 20 population surveys in the World Health Organization World Mental Health Survey Initiative (n = 51,295 aged 18+). The Composite International Diagnostic Interview (3.0) assessed 12-month PTSD and other common DSM-IV disorders. Respondents with 12-month PTSD were assessed for single versus multiple TEs implicated in their symptoms. Associations were examined with age of onset (AOO), functional impairment, comorbidity, and PTSD symptom counts. 19.8% of respondents with 12-month PTSD reported that their symptoms were associated with multiple TEs. Cases who associated their PTSD with four or more TEs had greater functional impairment, an earlier AOO, longer duration, higher comorbidity with mood and anxiety disorders, elevated hyperarousal symptoms, higher proportional exposures to partner physical abuse and other types of physical assault, and lower proportional exposure to unexpected death of a loved one than cases with fewer associated TEs. A risk threshold was observed in this large-scale cross-national database wherein cases who associated their PTSD with four or more TEs presented a more "complex" clinical picture with substantially greater functional impairment and greater morbidity than other cases of PTSD. PTSD cases associated with four or more TEs may merit specific and targeted intervention strategies. © 2013 Wiley Periodicals, Inc.

  1. The U.S. Geological Survey Monthly Water Balance Model Futures Portal

    Science.gov (United States)

    Bock, Andy

    2017-03-16

    Simulations of future climate suggest profiles of temperature and precipitation may differ significantly from those in the past. These changes in climate will likely lead to changes in the hydrologic cycle. As such, natural resource managers are in need of tools that can provide estimates of key components of the hydrologic cycle, uncertainty associated with the estimates, and limitations associated with the climate forcing data used to estimate these components. To help address this need, the U.S. Geological Survey Monthly Water Balance Model Futures Portal (https://my.usgs.gov/mows/) provides a user friendly interface to deliver hydrologic and meteorological variables for monthly historic and potential future climatic conditions across the continental United States.

  2. Principles of compilers

    CERN Document Server

    Su, Yunlin

    2011-01-01

    ""Principles of Compilers: A New Approach to Compilers Including the Algebraic Method"" introduces the ideas of the compilation from the natural intelligence of human beings by comparing similarities and differences between the compilations of natural languages and programming languages. The notation is created to list the source language, target languages, and compiler language, vividly illustrating the multilevel procedure of the compilation in the process. The book thoroughly explains the LL(1) and LR(1) parsing methods to help readers to understand the how and why. It not only covers estab

  3. Quit and Smoking Reduction Rates in Vape Shop Consumers: A Prospective 12-Month Survey

    Directory of Open Access Journals (Sweden)

    Riccardo Polosa

    2015-03-01

    Aims: Here, we present results from a prospective pilot study that was aimed at surveying changes in daily cigarette consumption in smokers making their first purchase at vape shops. Modifications in product purchases were also noted. Design: Participants were instructed how to charge, fill, activate and use their e-cigarettes (e-cigs). Participants were encouraged to use these products in the anticipation of reducing the number of cig/day smoked. Settings: Staff from LIAF contacted 10 vape shops in the province of the city of Catania (Italy) that acted as sponsors to the 2013 No Tobacco Day. Participants: 71 adult smokers (≥18 years old) making their first purchase at local participating vape shops were asked by professional retail staff to complete a form. Measurements: Their cigarette consumption was followed up prospectively at 6 and 12 months. Details of product purchases (i.e., e-cig hardware, e-liquid nicotine strengths and flavours) were also noted. Findings: Retention was high, with 69% of participants attending their final follow-up visit. At 12 months, 40.8% of subjects could be classified as quitters, 25.4% as reducers and 33.8% as failures. Switching from standard refillables (initial choice) to more advanced devices (MODs) was observed in this study (from 8.5% at baseline to 18.4% at 12 months), as well as a trend towards decreasing e-liquid nicotine strength, with more participants adopting low nicotine strengths (from 49.3% at baseline to 57.1% at 12 months). Conclusions: We have found that smokers purchasing e-cigarettes from vape shops with professional advice and support can achieve high success rates.

  4. Evidencing a prominent Moho topography beneath the Iberian-Western Mediterranean Region, compiled from controlled-source and natural seismic surveys

    Science.gov (United States)

    Diaz, Jordi; Gallart, Josep; Carbonell, Ramon

    2016-04-01

    The complex tectonic interaction between the European and African plates in the Western Mediterranean since Mesozoic times has left marked imprints in the present-day crustal architecture of this area, particularly regarding the lateral variations in crustal and lithospheric thickness. The detailed mapping of such variations is essential to understanding the regional geodynamics, as it provides major constraints for different seismological, geophysical and geodynamic modeling methods at both lithospheric and asthenospheric scales. Since the 1970s, the lithospheric structure beneath the Iberian Peninsula and its continental margins has been extensively investigated using deep multichannel seismic reflection and refraction/wide-angle reflection profiling experiments. Diaz and Gallart (2009) presented a compilation of the results then available beneath the Iberian Peninsula. In order to improve the picture of the whole region, we have now extended the geographical area to include northern Morocco and the surrounding waters. We have also included in the compilation the results arising from all the seismic surveys performed in the area and documented in the last few years. The availability of broad-band sensors and data loggers with large storage capacity has made it possible in the last decade to boost investigations of crustal and lithospheric structure using natural seismicity, providing a spatial resolution never achieved before. The TopoIberia-Iberarray network, deployed over Iberia and northern Morocco, has provided a good example of these new-generation seismic experiments. The database holds ~300 sites, including the permanent networks in the area, and hence forms a unique seismic database in Europe. In this contribution, we retrieve the results on crustal thickness presented by Mancilla and Diaz (2015) using data from the TopoIberia and associated experiments, and we complement them with additional estimations beneath the Rif Cordillera

  5. The 60-month all-sky BAT Survey of AGN and the Anisotropy of Nearby AGN

    Energy Technology Data Exchange (ETDEWEB)

    Ajello, M.; /KIPAC, Menlo Park; Alexander, D.M.; /Durham U.; Greiner, J.; /Garching, Max Planck Inst., MPE; Madejski, G.M.; /KIPAC, Menlo Park; Gehrels, N.; /NASA, Goddard; Burlon, D.; /Garching, Max Planck Inst., MPE

    2012-04-02

    Surveys above 10 keV represent one of the best resources to provide an unbiased census of the population of Active Galactic Nuclei (AGN). We present the results of 60 months of observation of the hard X-ray sky with Swift/BAT. In this timeframe, BAT detected (in the 15-55 keV band) 720 sources in an all-sky survey, of which 428 are associated with AGN, most of which are nearby. Our sample has negligible incompleteness and statistics a factor of ~2 larger than similarly complete sets of AGN. Our sample contains (at least) 15 bona-fide Compton-thick AGN and 3 likely candidates. Compton-thick AGN represent ~5% of AGN samples detected above 15 keV. We use the BAT dataset to refine the determination of the LogN-LogS of AGN, which is extremely important, now that NuSTAR prepares for launch, towards assessing the AGN contribution to the cosmic X-ray background. We show that the LogN-LogS of AGN selected above 10 keV is now established to a ~10% precision. We derive the luminosity function of Compton-thick AGN and measure a space density of 7.9 (+4.1/-2.9) × 10⁻⁵ Mpc⁻³ for objects with a de-absorbed luminosity larger than 2 × 10⁴² erg s⁻¹. As the BAT AGN are mostly local, they allow us to investigate the spatial distribution of AGN in the nearby Universe regardless of absorption. We find concentrations of AGN that coincide spatially with the largest congregations of matter in the local (≤85 Mpc) Universe. There is some evidence that the fraction of Seyfert 2 objects is larger than average in the direction of these dense regions.

  6. Women’s perceptions about reducing the frequency of monthly bleeding: results from a multinational survey

    Directory of Open Access Journals (Sweden)

    Szarewski A

    2013-05-01

    Background: Monthly bleeding can have a negative impact on daily life and, given the choice, many women would reduce the frequency of bleeding. While some women choose to occasionally postpone or reduce bleeding frequency with an oral contraceptive (OC), most women have no or limited experience of regularly reducing the frequency of scheduled bleeding with OCs, ie, the extended OC regimen. Study design: An online survey of 4039 women aged 15–49 years who were currently using, had used, or would consider using any form of hormonal contraception was conducted in Brazil, Canada, Czech Republic, France, Germany, Italy, UK, and USA to assess awareness of and the reasons for and against reducing bleeding frequency. Results: Overall, 51.1% and 30.7% of women surveyed were aware that they could occasionally or regularly reduce bleeding frequency with an OC. Moreover, 27.6% and 9.9% of previous/current OC users had occasionally or regularly reduced bleeding frequency with an OC. The main reasons for reducing bleeding frequency were convenience, physician recommendations, special events, and relief of problems associated with bleeding. Many women mistakenly believed that reducing bleeding frequency would have a negative health impact. Conclusion: Additional efforts are needed to educate women about the possibility and potential health benefits of reducing bleeding frequency and to dispel misconceptions about the use of extended OC regimens. Keywords: extended regimen, menstruation, oral contraceptive, withdrawal bleeding, scheduled bleeding

  7. Calculating correct compilers

    DEFF Research Database (Denmark)

    Bahr, Patrick; Hutton, Graham

    2015-01-01

    In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms
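
    The sketch below shows, for the arithmetic-expression case, the shape of the artifact such a calculation delivers: a reference semantics, a compiler to a small stack machine, and the correctness property relating them. It is an illustrative Python rendering under assumed encodings, not the authors' development.

```python
# Arithmetic expressions compiled to a toy stack machine, together with the
# correctness property relating compiler, virtual machine, and semantics.
def eval_expr(e):
    """Reference semantics for expressions: ("val", n) or ("add", e1, e2)."""
    return e[1] if e[0] == "val" else eval_expr(e[1]) + eval_expr(e[2])

def compile_expr(e):
    """Compile an expression to PUSH/ADD instructions for a stack machine."""
    if e[0] == "val":
        return [("PUSH", e[1])]
    return compile_expr(e[1]) + compile_expr(e[2]) + [("ADD",)]

def run(code, stack=()):
    """Execute stack-machine code, returning the final stack."""
    for instr in code:
        if instr[0] == "PUSH":
            stack = stack + (instr[1],)
        else:  # ADD: replace the top two values by their sum
            stack = stack[:-2] + (stack[-2] + stack[-1],)
    return stack

# Correctness property the calculation establishes:
#   run(compile_expr(e)) == (eval_expr(e),)
e = ("add", ("val", 1), ("add", ("val", 2), ("val", 3)))
assert run(compile_expr(e)) == (eval_expr(e),)
print(run(compile_expr(e)))   # (6,)
```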

  8. Engineering a compiler

    CERN Document Server

    Cooper, Keith D

    2012-01-01

    As computing has changed, so has the role of both the compiler and the compiler writer. The proliferation of processors, environments, and constraints demands an equally large number of compilers. To adapt, compiler writers retarget code generators, add optimizations, and work on issues such as code space or power consumption. Engineering a Compiler re-balances the curriculum for an introductory course in compiler construction to reflect the issues that arise in today's practice. Authors Keith Cooper and Linda Torczon convey both the art and the science of compiler construction and show best-practice algorithms for the major problems inside a compiler. The book focuses on the back end of the compiler, reflecting the focus of research and development over the last decade; applies the well-developed theory behind scanning and parsing to introduce concepts that play a critical role in optimization and code generation; and introduces the student to optimization through data-flow analysis, SSA form, and a selection of sc...
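
    As a taste of the back-end material, the sketch below runs an iterative data-flow analysis (live variables) over a toy control-flow graph; the block names, uses, and definitions are made-up illustrative data, not examples from the book.

```python
# Iterative live-variable analysis over a tiny control-flow graph, repeated
# until a fixed point is reached (illustrative sketch).
def live_variables(blocks, succ):
    """blocks: {name: (use_set, def_set)}; succ: {name: [successor names]}."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:                                   # iterate to a fixed point
        changed = False
        for b, (use, defs) in blocks.items():
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            new_in = use | (out - defs)
            if new_in != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = new_in, out
                changed = True
    return live_in, live_out

# b0: x = ...;  b1: y = x + 1;  b2: return y   (straight-line toy CFG)
blocks = {"b0": (set(), {"x"}), "b1": ({"x"}, {"y"}), "b2": ({"y"}, set())}
succ = {"b0": ["b1"], "b1": ["b2"], "b2": []}
print(live_variables(blocks, succ)[0])   # {'b0': set(), 'b1': {'x'}, 'b2': {'y'}}
```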

  9. Carbapenemase-Producing Klebsiella pneumoniae in Romania: A Six-Month Survey.

    Science.gov (United States)

    Lixandru, Brandusa Elena; Cotar, Ani Ioana; Straut, Monica; Usein, Codruta Romanita; Cristea, Dana; Ciontea, Simona; Tatu-Chitoiu, Dorina; Codita, Irina; Rafila, Alexandru; Nica, Maria; Buzea, Mariana; Baicus, Anda; Ghita, Mihaela Camelia; Nistor, Irina; Tuchiluş, Cristina; Indreas, Marina; Antohe, Felicia; Glasner, Corinna; Grundmann, Hajo; Jasir, Aftab; Damian, Maria

    2015-01-01

    This study presents the first characterization of carbapenem-non-susceptible Klebsiella pneumoniae isolates by means of a structured six-month survey performed in Romania as part of an Europe-wide investigation. Klebsiella pneumoniae clinical isolates from different anatomical sites were tested for antibiotic susceptibility by phenotypic methods and confirmed by PCR for the presence of four carbapenemase genes. Genome macrorestriction fingerprinting with XbaI was used to analyze the relatedness of carbapenemase-producing Klebsiella pneumoniae isolates collected from eight hospitals. Among 75 non-susceptible isolates, 65 were carbapenemase producers. The most frequently identified genotype was OXA-48 (n = 51 isolates), eight isolates were positive for blaNDM-1 gene, four had the blaKPC-2 gene, whereas two were positive for blaVIM-1. The analysis of PFGE profiles of OXA-48 and NDM-1 producing K. pneumoniae suggests inter-hospitals and regional transmission of epidemic clones. This study presents the first description of K. pneumoniae strains harbouring blaKPC-2 and blaVIM-1 genes in Romania. The results of this study highlight the urgent need for the strengthening of hospital infection control measures in Romania in order to curb the further spread of the antibiotic resistance.

  10. Carbapenemase-Producing Klebsiella pneumoniae in Romania: A Six-Month Survey.

    Directory of Open Access Journals (Sweden)

    Brandusa Elena Lixandru

    This study presents the first characterization of carbapenem-non-susceptible Klebsiella pneumoniae isolates by means of a structured six-month survey performed in Romania as part of a Europe-wide investigation. Klebsiella pneumoniae clinical isolates from different anatomical sites were tested for antibiotic susceptibility by phenotypic methods and confirmed by PCR for the presence of four carbapenemase genes. Genome macrorestriction fingerprinting with XbaI was used to analyze the relatedness of carbapenemase-producing Klebsiella pneumoniae isolates collected from eight hospitals. Among 75 non-susceptible isolates, 65 were carbapenemase producers. The most frequently identified genotype was OXA-48 (n = 51 isolates), eight isolates were positive for the blaNDM-1 gene, four had the blaKPC-2 gene, whereas two were positive for blaVIM-1. The analysis of PFGE profiles of OXA-48- and NDM-1-producing K. pneumoniae suggests inter-hospital and regional transmission of epidemic clones. This study presents the first description of K. pneumoniae strains harbouring blaKPC-2 and blaVIM-1 genes in Romania. The results of this study highlight the urgent need for the strengthening of hospital infection control measures in Romania in order to curb the further spread of antibiotic resistance.

  11. Carbapenemase-Producing Klebsiella pneumoniae in Romania: A Six-Month Survey

    Science.gov (United States)

    Straut, Monica; Usein, Codruta Romanita; Cristea, Dana; Ciontea, Simona; Codita, Irina; Rafila, Alexandru; Nica, Maria; Buzea, Mariana; Baicus, Anda; Ghita, Mihaela Camelia; Nistor, Irina; Tuchiluş, Cristina; Indreas, Marina; Antohe, Felicia; Glasner, Corinna; Grundmann, Hajo; Jasir, Aftab; Damian, Maria

    2015-01-01

    This study presents the first characterization of carbapenem-non-susceptible Klebsiella pneumoniae isolates by means of a structured six-month survey performed in Romania as part of an Europe-wide investigation. Klebsiella pneumoniae clinical isolates from different anatomical sites were tested for antibiotic susceptibility by phenotypic methods and confirmed by PCR for the presence of four carbapenemase genes. Genome macrorestriction fingerprinting with XbaI was used to analyze the relatedness of carbapenemase-producing Klebsiella pneumoniae isolates collected from eight hospitals. Among 75 non-susceptible isolates, 65 were carbapenemase producers. The most frequently identified genotype was OXA-48 (n = 51 isolates), eight isolates were positive for blaNDM-1 gene, four had the blaKPC-2 gene, whereas two were positive for blaVIM-1. The analysis of PFGE profiles of OXA-48 and NDM-1 producing K. pneumoniae suggests inter-hospitals and regional transmission of epidemic clones. This study presents the first description of K. pneumoniae strains harbouring blaKPC-2 and blaVIM-1 genes in Romania. The results of this study highlight the urgent need for the strengthening of hospital infection control measures in Romania in order to curb the further spread of the antibiotic resistance. PMID:26599338

  12. International survey of environmental programmes - a compilation of information from twelve countries received in response to a questionnaire distributed in 1992

    Energy Technology Data Exchange (ETDEWEB)

    Gyllander, C.; Karlberg, O.; Luening, M.; Larsson, C.M.; Johansson, G.

    1995-11-01

    The report compiles information from Cuba, Finland, Germany, Japan, South Korea, Lithuania, Luxembourg, Malaysia, Romania, Sweden, Switzerland and United Kingdom, relevant to the organisation and execution of programmes for environmental surveillance of nuclear facilities (source and environmental monitoring). 28 refs, 19 tabs.

  13. Documentation of methods and inventory of irrigation data collected for the 2000 and 2005 U.S. Geological Survey Estimated use of water in the United States, comparison of USGS-compiled irrigation data to other sources, and recommendations for future compilations

    Science.gov (United States)

    Dickens, Jade M.; Forbes, Brandon T.; Cobean, Dylan S.; Tadayon, Saeid

    2011-01-01

    Every five years since 1950, the U.S. Geological Survey (USGS) National Water Use Information Program (NWUIP) has compiled water-use information in the United States and published a circular report titled "Estimated use of water in the United States," which includes estimates of water withdrawals by State, sources of water withdrawals (groundwater or surface water), and water-use category (irrigation, public supply, industrial, thermoelectric, and so forth). This report discusses the impact of important considerations when estimating irrigated acreage and irrigation withdrawals, including estimates of conveyance loss, irrigation-system efficiencies, pasture, horticulture, golf courses, and double cropping.

  14. Kokkos GPU Compiler

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-15

    The Kokkos Clang compiler is a version of the Clang C++ compiler that has been modified to perform targeted code generation for Kokkos constructs, with the goal of generating highly optimized code and providing semantic (domain) awareness of these constructs, such as parallel for and parallel reduce, throughout the compilation toolchain. This approach is taken to explore the possibilities of exposing the developer’s intentions to the underlying compiler infrastructure (e.g. optimization and analysis passes within the middle stages of the compiler) instead of relying solely on the restricted capabilities of C++ template metaprogramming. To date our current activities have focused on correct GPU code generation and thus we have not yet focused on improving overall performance. The compiler is implemented by recognizing specific (syntactic) Kokkos constructs in order to bypass normal template expansion mechanisms and instead use the semantic knowledge of Kokkos to directly generate code in the compiler’s intermediate representation (IR), which is then translated into an NVIDIA-centric GPU program and supporting runtime calls. In addition, capturing and maintaining the higher-level semantics of Kokkos directly within the lower levels of the compiler has the potential to significantly improve the ability of the compiler to communicate with the developer in the terms of their original programming model/semantics.

  15. Biomimetics for NASA Langley Research Center: Year 2000 Report of Findings From a Six-Month Survey

    Science.gov (United States)

    Siochi, Emilie J.; Anders, John B., Jr.; Cox, David E.; Jegley, Dawn C.; Fox, Robert L.; Katzberg, Stephen J.

    2002-01-01

    This report represents an attempt to see if some of the techniques biological systems use to maximize their efficiency can be applied to the problems NASA faces in aeronautics and space exploration. It includes an internal survey of resources available at NASA Langley Research Center for biomimetics research efforts, an external survey of state of the art in biomimetics covering the Materials, Structures, Aerodynamics, Guidance and Controls areas. The Biomimetics Planning team also included ideas for potential research areas, as well as recommendations on how to implement this new program. This six-month survey was conducted in the second half of 1999.

  16. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Average Monthly Precipitation, 2002

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set represents the average monthly precipitation in millimeters multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...

  17. 12-Month and Lifetime Prevalence of Suicide Attempts among Black Adolescents in the National Survey of American Life

    Science.gov (United States)

    Joe, Sean; Baser, Raymond S.; Neighbors, Harold W.; Caldwell, Cleopatra H.; Jackson, James S.

    2009-01-01

    The data from the National Survey of American life on the suicidal behavior of 1,170 African American and Caribbean black adolescents aged 13 to 17 shows that black adolescents report having a lifetime prevalence of 7.5 percent for suicidal ideation and 2.7 percent for attempts. The 12-month prevalence of suicidal ideation is 3.2 percent and…

  18. Twelve-Month Prevalence of and Risk Factors for Suicide Attempts in the World Health Organization World Mental Health Surveys

    NARCIS (Netherlands)

    Borges, Guilherme; Nock, Matthew K.; Haro Abad, Josep M.; Hwang, Irving; Sampson, Nancy A.; Alonso, Jordi; Andrade, Laura Helena; Angermeyer, Matthias C.; Beautrais, Annette; Bromet, Evelyn; Bruffaerts, Ronny; de Girolamo, Giovanni; Florescu, Silvia; Gureje, Oye; Hu, Chiyi; Karam, Elie G.; Kovess-Masfety, Viviane; Lee, Sing; Levinson, Daphna; Elena Medina-Mora, Maria; Ormel, Johan; Posada-Villa, Jose; Sagar, Rajesh; Tomov, Toma; Uda, Hidenori; Williams, David R.; Kessler, Ronald C.

    2010-01-01

    Objective: Although suicide is a leading cause of death worldwide, clinicians and researchers lack a data-driven method to assess the risk of suicide attempts. This study reports the results of an analysis of a large cross-national epidemiologic survey database that estimates the 12-month prevalence

  19. 76 FR 1131 - Proposed Information Collection; Comment Request; Monthly Retail Trade Survey

    Science.gov (United States)

    2011-01-07

    ...-month merchandise inventories, and quarterly e-commerce sales of retailers in the United States by... are requested to report sales, e-commerce sales, and/or inventories each month. The sample, consisting...-44(06)S Non Department Store/Sales Only/WO E-Commerce. SM-44(06)SE Non Department Store/Sales Only W...

  20. Gravity Data for Southwestern Alaska (1294 records compiled)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The gravity station data (1294 records) were compiled by the Alaska Geological Survey and the U.S. Geological Survey, Menlo Park, California. This data base was...

  1. A Compilation of Vs30 Values in the United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Compiled Vs30 measurements obtained by studies funded by the U.S. Geological Survey (USGS) and other governmental agencies. Thus far, there are 2,997 sites in the...

  2. The European CRT Survey : 1 year (9-15 months) follow-up results

    NARCIS (Netherlands)

    Bogale, Nigussie; Priori, Silvia; Cleland, John G. F.; Brugada, Josep; Linde, Cecilia; Auricchio, Angelo; van Veldhuisen, Dirk J.; Limbourg, Tobias; Gitt, Anselm; Gras, Daniel; Stellbrink, Christoph; Gasparini, Maurizio; Metra, Marco; Derumeaux, Genevieve; Gadler, Fredrik; Buga, Laszlo; Dickstein, Kenneth

    2012-01-01

    Aims The European CRT Survey is a joint initiative of the Heart Failure Association (HFA) and the European Heart Rhythm Association (EHRA) of the European Society of Cardiology evaluating the contemporary implantation practice of cardiac resynchronization therapy (CRT) in Europe. Methods and results

  3. The European CRT Survey : 1 year (9-15 months) follow-up results

    NARCIS (Netherlands)

    Bogale, Nigussie; Priori, Silvia; Cleland, John G. F.; Brugada, Josep; Linde, Cecilia; Auricchio, Angelo; van Veldhuisen, Dirk J.; Limbourg, Tobias; Gitt, Anselm; Gras, Daniel; Stellbrink, Christoph; Gasparini, Maurizio; Metra, Marco; Derumeaux, Genevieve; Gadler, Fredrik; Buga, Laszlo; Dickstein, Kenneth

    Aims The European CRT Survey is a joint initiative of the Heart Failure Association (HFA) and the European Heart Rhythm Association (EHRA) of the European Society of Cardiology evaluating the contemporary implantation practice of cardiac resynchronization therapy (CRT) in Europe. Methods and results

  4. Compilation of Shona Children's

    African Journals Online (AJOL)

    Mev. R.B. Ruthven

    Peniah Mabaso, African Languages Research Institute (ALRI), University of. Zimbabwe, Harare ... thirteen years age group and their teachers. Student ... The Compilation of a Shona Children's Dictionary: Challenges and Solutions. 113 language .... The current orthography is linguistically constricting in a number of ways.

  5. Embedded Processor Oriented Compiler Infrastructure

    Directory of Open Access Journals (Sweden)

    DJUKIC, M.

    2014-08-01

    In recent years, research on special compiler techniques and algorithms for embedded processors has broadened the knowledge of how to achieve better compiler performance for irregular processor architectures. However, industrial-strength compilers, besides the ability to generate efficient code, must also be robust, understandable, maintainable, and extensible. This raises the need for a compiler infrastructure that provides means for the convenient implementation of embedded-processor-oriented compiler techniques. The Cirrus Logic Coyote 32 DSP is an example that shows how traditional compiler infrastructure is not able to cope with the problem. That is why a new compiler infrastructure was developed for this processor, based on research in the field of embedded-system software tools and experience in the development of industrial-strength compilers. The new infrastructure is described in this paper. Compiler-generated code quality is compared with code generated by the previous compiler for the same processor architecture.

  6. Elements of compiler design

    CERN Document Server

    Meduna, Alexander

    2007-01-01

    Preface. Introduction: Mathematical Preliminaries; Compilation; Rewriting Systems. Lexical Analysis: Models; Methods; Theory. Syntax Analysis: Models; Methods; Theory. Deterministic Top-Down Parsing: Predictive Sets and LL Grammars; Predictive Parsing. Deterministic Bottom-Up Parsing: Precedence Parsing; LR Parsing. Syntax-Directed Translation and Intermediate Code Generation: Bottom-Up Syntax-Directed Translation and Intermediate Code Generation; Top-Down Syntax-Directed Translation; Symbol Table; Semantic Analysis; Softw

  7. Metallurgy: A compilation

    Science.gov (United States)

    1972-01-01

    A compilation on the technical uses of various metallurgical processes is presented. Descriptions are given of the mechanical properties of various alloys, ranging from TAZ-813 at 2200 F to investment cast alloy 718 at -320 F. Methods are also described for analyzing some of the constituents of various alloys from optical properties of carbide precipitates in Rene 41 to X-ray spectrographic analysis of the manganese content of high chromium steels.

  8. Data on cardiovascular and pulmonary diseases among smokers of menthol and non-menthol cigarettes compiled from the National Health and Nutrition Examination Survey (NHANES, 1999–2012

    Directory of Open Access Journals (Sweden)

    Cynthia Van Landingham

    2017-06-01

    This Data in Brief contains results from three different survey logistic regression models comparing risks of self-reported diagnoses of cardiovascular and pulmonary diseases among smokers of menthol and non-menthol cigarettes. Analyses employ data from National Health and Nutrition Examination Survey (NHANES) cycles administered between 1999 and 2012, combined and in subsets. Raw data may be downloaded from the National Center for Health Statistics. Results were not much affected by which covariates were included in the models, but depended strongly on the NHANES cycles included in the analysis. All three models returned elevated risk estimates for three endpoints when they were run in individual NHANES cycles (congestive heart failure in 2001–02; hypertension in 2003–04; and chronic obstructive pulmonary disease in 2005–06), and all three models returned null results for these endpoints when data from 1999–2012 were combined.
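
    The sketch below shows the general form of such a comparison as a weighted logistic regression on synthetic data; the column names, weight variable, and synthetic outcome are assumptions, and using simple frequency weights is a simplification of a full design-based (strata/PSU) survey analysis.

```python
# Weighted logistic regression comparing an outcome between menthol and
# non-menthol smokers (illustrative sketch on synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the prepared analysis file (one row per adult smoker);
# in a real analysis these columns would come from the combined NHANES cycles.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "menthol": rng.integers(0, 2, n),            # 1 = menthol smoker
    "age": rng.integers(20, 80, n),
    "gender": rng.choice(["M", "F"], n),
    "bmi": rng.normal(28, 5, n),
    "exam_weight": rng.uniform(5000, 50000, n),  # survey examination weight
})
# Synthetic outcome: risk rises with age only (menthol has no true effect here).
p = 1 / (1 + np.exp(-(-6 + 0.06 * df["age"])))
df["chd"] = rng.binomial(1, p)

model = smf.glm(
    "chd ~ menthol + age + C(gender) + bmi",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(df["exam_weight"]),  # survey weights, simplified
).fit()

or_est = np.exp(model.params["menthol"])
ci_low, ci_high = np.exp(model.conf_int().loc["menthol"])
print(f"OR for menthol vs. non-menthol: {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```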

  9. Survey of 82 cases of meningitis in infants under 2 months of age

    Directory of Open Access Journals (Sweden)

    Fatehi I

    1998-06-01

    In this study we review 82 infants under two months of age with bacterial meningitis admitted to Tehran University hospitals during a 14-year period. The male to female ratio was 1.4 to 1. The patterns of predominance among bacterial pathogens changed during the period of study. During the first six years the most common pathogens were Salmonella spp., but during the later years E. coli became the predominant pathogen, and meningitis caused by GBS and Staph. epidermidis was also observed. The case fatality rate was 37.8 percent. The antibiogram revealed that E. coli isolates were 100 percent resistant to ampicillin and 50 percent resistant to gentamicin; 40 percent of all bacteria isolated were resistant to ampicillin and gentamicin. These findings provide guidelines for the selection of empiric antimicrobial agents in our country

  10. Compilation of geology, mineralization, geochemistry and geophysical study of IP/RS & ground magnetic survey at Roudgaz area, southeast of Gonabad, Khorasan Razavi province

    Directory of Open Access Journals (Sweden)

    Hossein Hajimirzajan

    2013-04-01

    The Roudgaz prospect area is a Cu, Sn, Pb, Zn, and Au polymetal vein system located to the southeast of Gonabad and in the northeast of the Lut block. Oxidant subvolcanic Tertiary rocks of monzonite to monzodiorite porphyry composition intruded the middle Jurassic metamorphic rocks. The majority of the intrusive bodies are affected by carbonation, argillic, sericitic, and silicification-tourmaline alteration. Mineralization in the area is fault-controlled and occurs as veins with a dominant NW-SE direction and 85-90º dip. Primary minerals are quartz, tourmaline, chalcopyrite, and pyrite, and secondary minerals are malachite, azurite, and goethite. Geochemical sampling using the chip composite method indicated high anomalies of Cu, Sn, Pb, and As (up to 10000 ppm), Zn (up to 5527 ppm), and Au (up to 325 ppb). A broad gossan zone is present in the area and is related to the oxidation of sulfide minerals. An IP/RS survey was performed over the geochemical anomalies to identify the location and extent of sulfide mineralization at depth. Generally, chargeability increases in gossan zones, veins, old workings and geochemical anomalies. Resistivity over the quartzite unit, and also in locations where the mineralized vein is associated with quartz, shows a high anomaly of up to 425 ohm-m. Due to the high geochemical anomaly of Sn and its relation to reduced subvolcanic intrusives, a ground magnetic survey was performed to identify the location of the magnetite (oxidant) and ilmenite (reduced) series at depth. The variation of Total Magnetic Intensity (TMI) is 335.1 gamma in the TMI map. The highest magnetic anomalies in the RTP map are located in the north of the survey area, are related to the magnetite series (hornblende biotite monzodiorite porphyry), and extend to the south at depth. The lowest magnetic anomaly is located in the center of the survey area, particularly to the east of Roudgaz village, correlating with the highest chargeability and geochemical anomalies. Based

  11. Fault-Tree Compiler

    Science.gov (United States)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Gates of five different types are allowed in a fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. A set of programs was created forming the basis for a reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
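
    The sketch below illustrates the core calculation an FTC-style tool performs: the top-event probability of a fault tree over statistically independent basic events. The tree encoding and event names are assumptions; the gate types mirror those listed above.

```python
# Top-event probability of a fault tree with independent basic events
# (illustrative sketch, not the FTC input language or algorithm).
from itertools import combinations
from math import prod

def gate_prob(node, basic):
    """node: ("BASIC", name), (gate_type, [children]), or ("MOFN", m, [children])."""
    kind = node[0]
    if kind == "BASIC":
        return basic[node[1]]
    if kind == "MOFN":
        m, children = node[1], node[2]
        ps = [gate_prob(c, basic) for c in children]
        # probability that at least m of the n independent children occur
        return sum(
            prod(ps[i] for i in on) * prod(1 - ps[i] for i in range(len(ps)) if i not in on)
            for k in range(m, len(ps) + 1) for on in combinations(range(len(ps)), k))
    ps = [gate_prob(c, basic) for c in node[1]]
    if kind == "AND":    return prod(ps)
    if kind == "OR":     return 1 - prod(1 - p for p in ps)
    if kind == "XOR":    return ps[0] * (1 - ps[1]) + ps[1] * (1 - ps[0])  # 2 inputs
    if kind == "INVERT": return 1 - ps[0]
    raise ValueError(kind)

basic = {"pump_fails": 0.01, "valve_fails": 0.02, "sensor_fails": 0.05}
top = ("OR", [("AND", [("BASIC", "pump_fails"), ("BASIC", "valve_fails")]),
              ("BASIC", "sensor_fails")])
print(gate_prob(top, basic))   # approximately 0.05019
```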

  12. The high resolution topographic evolution of an active retrogressive thaw slump compiled from a decade of photography, ground surveys, laser scans and satellite imagery

    Science.gov (United States)

    Crosby, B. T.; Barnhart, T. B.; Rowland, J. C.

    2015-12-01

    Remote sensing imagery has enabled the temporal reconstruction of thermal erosion features, including lakes, shorelines and hillslope failures, in remote Arctic locations, yet these planar data limit analysis to lines and areas. This study explores the application of varying techniques to reconstruct the three-dimensional evolution of a single thermal erosion feature using a mixture of opportunistic oblique photos, ground surveys and satellite imagery. At the Selawik River retrogressive thaw slump in northwest Alaska, a bush plane collected oblique aerial photos when the feature was first discovered in 2004 and in subsequent years. These images were recently processed via Structure from Motion to generate georeferenced point clouds for the years prior to the initiation of our research. High-resolution ground surveys in 2007, 2009 and 2010 were completed using a robotic total station. Terrestrial laser scans (TLS) were collected in the summers of 2011 and 2012. Analysis of stereo satellite imagery from 2012 and 2015 enables continued monitoring of the feature after ground campaigns ended. As accurate coregistration between point clouds is vital to topographic change detection, all prior and subsequent datasets were georeferenced to stable features observed in the 2012 TLS scan. Though this coregistration introduces uncertainty into each image, the magnitudes of uncertainty are significantly smaller than the topographic changes detected. Upslope retreat of the slump headwall generally decreases over time as the slump floor progresses from a highly dissected gully topography to a low-relief, earthflow-dominated depositional plane. The decreasing slope of the slump floor diminishes transport capacity, resulting in the progressive burial of the slump headwall, thus decreasing headwall retreat rates. This self-regulation of slump size based on feature relief and transport capacity suggests a capacity to predict the maximum size a given feature can expand to before
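
    The change-detection step described above can be illustrated with a minimal DEM-differencing sketch: subtract coregistered elevation grids and mask changes smaller than the combined coregistration and measurement uncertainty. The grids and threshold here are toy assumptions, not data from the Selawik site.

```python
# DEM of Difference with a minimum-detectable-change mask (illustrative sketch).
import numpy as np

def dem_of_difference(dem_later, dem_earlier, min_detectable_change=0.15):
    """Return elevation change (m); |dz| below the threshold is set to NaN."""
    dz = dem_later - dem_earlier
    dz[np.abs(dz) < min_detectable_change] = np.nan
    return dz

earlier = np.array([[10.0, 10.2], [9.8, 10.1]])   # toy 2x2 DEMs (m)
later   = np.array([[ 9.2, 10.1], [9.9,  9.0]])   # headwall retreat lowers cells
print(dem_of_difference(later, earlier))
# [[-0.8  nan]
#  [ nan -1.1]]
```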

  13. A compilation of field surveys on gaseous elemental mercury (GEM) from contrasting environmental settings in Europe, South America, South Africa and China: separating fads from facts.

    Science.gov (United States)

    Higueras, Pablo; Oyarzun, Roberto; Kotnik, Joze; Esbrí, José María; Martínez-Coronado, Alba; Horvat, Milena; López-Berdonces, Miguel Angel; Llanos, Willians; Vaselli, Orlando; Nisi, Barbara; Mashyanov, Nikolay; Ryzov, Vladimir; Spiric, Zdravko; Panichev, Nikolay; McCrindle, Rob; Feng, Xinbin; Fu, Xuewu; Lillo, Javier; Loredo, Jorge; García, María Eugenia; Alfonso, Pura; Villegas, Karla; Palacios, Silvia; Oyarzún, Jorge; Maturana, Hugo; Contreras, Felicia; Adams, Melitón; Ribeiro-Guevara, Sergio; Niecenski, Luise Felipe; Giammanco, Salvatore; Huremović, Jasna

    2014-08-01

    Mercury is transported globally in the atmosphere mostly in its gaseous elemental form (GEM, Hg(0)), yet few worldwide studies that take into account different and contrasting environmental settings are available in a single publication. This work presents and discusses data from Argentina, Bolivia, Bosnia and Herzegovina, Brazil, Chile, China, Croatia, Finland, Italy, Russia, South Africa, Spain, Slovenia and Venezuela. We classified the information into four groups: (1) mining districts where this contaminant poses or has posed a risk to human populations and/or ecosystems; (2) cities, where the concentration of atmospheric mercury could be higher than normal due to the burning of fossil fuels and industrial activities; (3) areas with natural emissions from volcanoes; and (4) pristine areas where no anthropogenic influence was apparent. All the surveys were performed using portable LUMEX RA-915 series atomic absorption spectrometers. The results for cities fall within a low GEM concentration range that rarely exceeds 30 ng m(-3), that is, 6.6 times lower than the restrictive ATSDR threshold (200 ng m(-3)) for chronic exposure to this pollutant. We also observed this behavior in the former mercury mining districts, where few data were above 200 ng m(-3). We noted that high concentrations of GEM are localized phenomena that fade away over short distances. However, this does not imply that they do not pose a risk for those working in close proximity to the source. This is the case for artisanal gold miners who heat the Au-Hg amalgam to vaporize mercury. In this respect, while GEM can be regarded as a hazard because of possible physical-chemical transformations into other species, it is only under these localized conditions, implying exposure to high GEM concentrations, that it becomes a direct risk for humans.

  14. A survey on changes in opioid use and risk factors in the survivors eight months after Bam earthquake

    Directory of Open Access Journals (Sweden)

    A. Rahimi Movaghar

    2006-07-01

    Full Text Available Background: In 2003, an earthquake in Bam led to the death and injury of many of its inhabitants. The aim of this study was to assess the changes in opioid drug use among the survivors eight months after the earthquake, in comparison with the month before the quake, and the related factors. Methods: An epidemiologic survey was carried out on 779 survivors, selected by desert sampling from Bam citizens aged 15 and over. Bivariate and multivariate logistic regression analyses were done to examine the relationship between an increase in opioid use and various factors. Results: An increase in opioid use was reported by 18.3 percent of men and 2.3 percent of women. The odds ratio (OR) for an increase in opioid use was 9.4 times higher in men than in women (95% CI=4.9-18.0). In men, an increase in opioid use was related to the history of opioid use during the month before the earthquake (OR=5.6, 95% CI=2.4-13.1), age (OR in the age group 30 to 44 was 4.7 times that of those below 30, 95% CI 1.8 to 12.1), and PTSD (OR=3.7, 95% CI=1.5-9.2). In women, it was related only to the history of opioid use during the month before the earthquake (OR=43.8, 95% CI=12.5-154.0). Conclusion: The findings show that following disasters, especially in areas or groups where drug use is common, an increase in drug use might occur. In these situations, provision of preventive and treatment interventions, particularly for the at-risk population, is necessary.

  15. Perception and management of fever in infants up to six months of age: A survey of US pediatricians

    Directory of Open Access Journals (Sweden)

    Markson Leona E

    2010-12-01

    Full Text Available Abstract Background A fever is an increase in the body's temperature above normal. This study examined how US pediatricians perceive and manage fever generally versus fever occurring after vaccination in infants up to six months of age. Methods A web-based survey of 400 US pediatricians subscribing to the Physician Desk Reference was conducted in December 2008. Data were collected on the respondents' socio-demographics, number of years in practice, type of practice, their definition of fever severity in infants, and their recommendations for managing fever. Generalized Estimating Equations were used to estimate the odds of a pediatrician recommending an emergency room (ER) visit or hospital admission, an office visit, or another treatment option, as a function of the infant's age, temperature, whether the infant had recently received a vaccine, and whether the fever was reported during or after office hours, adjusting for practice type and socio-demographic variables. Results On average, the 400 responding pediatricians' (64% female; average age 49 years; average 20 years in practice) threshold for an extremely serious fever was ≥39.5°C for infants 0-2 months of age and ≥40.0°C for infants >2-6 months of age. Infants were more likely to be referred to an ER or admitted to hospital if they were ≤2 months of age (Odds Ratio [OR] 29.13; 95% Confidence Interval [95% CI] 23.69-35.82) or >2-4 months old (OR 3.37; 95% CI 2.99-3.81) versus >4 to 6 months old, or if they had a temperature ≥40.0°C (OR 21.06; 95% CI 17.20-25.79) versus a temperature of 38.0-38.5°C. Fever after vaccination (OR 0.29; 95% CI 0.25-0.33) or fever reported during office hours (OR 0.17; 95% CI 0.15-0.20) was less likely to result in referral to an ER or hospital admission. Conclusion Within this sample of US pediatricians, perception of the severity of fever in infants, as well as the response to infant fever, is likely to depend on the infant's age. Recommendations for the management

  16. Compiler Feedback using Continuous Dynamic Compilation during Development

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Karlsson, Sven; Probst, Christian W.

    2014-01-01

    Optimizing compilers are vital for performance. However, the compiler's ability to optimize aggressively is limited in some cases. To address this limitation, we have developed a compiler guiding the programmer in making small source code changes, potentially making the source code more amenable...... to optimization. This tool can help programmers understand what the optimizing compiler has done and suggest automatic source code changes in cases where the compiler refrains from optimizing. We have integrated our tool into an integrated development environment, interactively giving feedback as part...... of the programmer's development flow. We have evaluated our preliminary implementation and show it can guide to a 12% improvement in performance. Furthermore, the tool can be used as an interactive optimization adviser improving the performance of the code generated by a production compiler. Here it can lead to a 153...

  17. Voyager Outreach Compilation

    Science.gov (United States)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animations/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but mainly comprises computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation, and weather features, as well as significant discussion of the planet's natural satellites, are also presented.

  18. Polygon boundary describing the source surveys used to build the Bathymetric Terrain Model of the U.S. Atlantic Margin of 100-meter resolution compiled by the U.S. Geological Survey (Esri Shapefile, Geographic WGS 84 Coordinate System)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric Terrain Models (BTMs) of seafloor morphology are an important component of marine geological investigations. Advances in acquisition and processing...

  19. Polygon Boundary Describing the Source Surveys Used to Build the Bathymetric Terrain Model of the Puerto Rico Trench and Northeastern Caribbean Region Compiled by the U.S. Geological Survey (PRBATHSOURCE, Esri Shapefile, Geographic projection WGS 84).

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric terrain models (BTMs) of seafloor morphology are an important component of marine geological investigations. Advances in acquisition and processing...

  1. The 60 Month All-Sky Burst Alert Telescope Survey of Active Galactic Nucleus and the Anisotropy of Nearby AGNs

    Science.gov (United States)

    Ajello, M.; Alexander, D. M.; Greiner, J.; Madejeski, G. M.; Gehrels, N.; Burlon, D.

    2014-01-01

    Surveys above 10 keV represent one of the best resources to provide an unbiased census of the population of active galactic nuclei (AGNs). We present the results of 60 months of observation of the hard X-ray sky with the Swift/Burst Alert Telescope (BAT). In this time frame, BAT detected (in the 15-55 keV band) 720 sources in an all-sky survey, of which 428 are associated with AGNs, most of which are nearby. Our sample has negligible incompleteness and statistics a factor of approx. 2 larger than similarly complete sets of AGNs. Our sample contains (at least) 15 bona fide Compton-thick AGNs and 3 likely candidates. Compton-thick AGNs represent approx. 5% of AGN samples detected above 15 keV. We use the BAT data set to refine the determination of the log N-log S of AGNs, which is extremely important, now that NuSTAR prepares for launch, toward assessing the AGN contribution to the cosmic X-ray background. We show that the log N-log S of AGNs selected above 10 keV is now established to approx. 10% precision. We derive the luminosity function of Compton-thick AGNs and measure a space density of 7.9(+4.1/-2.9) × 10(exp -5)/cubic Mpc for objects with a de-absorbed luminosity larger than 2 × 10(exp 42) erg/s. As the BAT AGNs are mostly local, they allow us to investigate the spatial distribution of AGNs in the nearby universe regardless of absorption. We find concentrations of AGNs that coincide spatially with the largest congregations of matter in the local (<~85 Mpc) universe. There is some evidence that the fraction of Seyfert 2 objects is larger than average in the direction of these dense regions.

  2. Advertisements for medicines in leading medical journals in 18 countries: a 12-month survey of information content and standards.

    Science.gov (United States)

    Herxheimer, A; Lundborg, C S; Westerholm, B

    1993-01-01

    The information content of 6,710 advertisements for medicines in medical journals was surveyed to provide a baseline for monitoring the effect of WHO's Ethical Criteria for Medicinal Drug Promotion. The advertisements (ads) appeared during 12 months (1987-1988) in 23 leading national medical journals in 18 countries. Local participants, mostly doctors or pharmacists, examined them. The presence or absence in each ad of important information was noted. In most ads the generic name appeared in smaller type than the brand name. Indications were mentioned more often than the negative effects of medicines. The ads gave less pharmacological than medical information. However, important warnings and precautions were missing in half, and side effects and contraindications in about 40 percent. Prices tended to be given only in countries where a social security system pays for the medicines. The information content of ads in the developing countries differed surprisingly little from that in the industrialized countries. Almost all the ads (96 percent) included one or more pictures; 58 percent of these were considered irrelevant. The authors believe it is a mistake to regard ads as trivial. If they are not considered seriously they will influence the use of medicines as they are intended to do, but read critically they can provide useful information.

  3. Algorithmic synthesis using Python compiler

    Science.gov (United States)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tool.
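
    As a rough sketch of the front end of such a flow (my own illustration, not the authors' tool; it assumes Python 3.9+ for ast.unparse), the example below parses a small algorithmic Python function with the standard ast module and lists its assignments and arithmetic operators, i.e. the kind of structure a Python-to-VHDL compiler would have to map onto hardware:

        import ast

        # A small algorithmic description of the kind an HLS flow might accept
        # (hypothetical example kernel, not from the paper).
        SOURCE = """
        def mac(a, b, acc):
            prod = a * b
            acc = acc + prod
            return acc
        """

        class KernelInspector(ast.NodeVisitor):
            """Walk the AST and report assignments and arithmetic operators."""

            def visit_FunctionDef(self, node):
                print(f"kernel '{node.name}' with arguments "
                      f"{[arg.arg for arg in node.args.args]}")
                self.generic_visit(node)

            def visit_Assign(self, node):
                targets = [ast.unparse(t) for t in node.targets]
                print(f"  assignment to {targets}: {ast.unparse(node.value)}")
                self.generic_visit(node)

            def visit_BinOp(self, node):
                print(f"    binary op: {type(node.op).__name__}")
                self.generic_visit(node)

        import textwrap
        tree = ast.parse(textwrap.dedent(SOURCE))
        KernelInspector().visit(tree)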

  4. Theory and practice of compilation

    CERN Document Server

    Langmaack, H

    1972-01-01

    Compilation is the translation of high level language programs into machine code. Correct translation can only be achieved if syntax and semantics of programming languages are clearly defined and strictly obeyed by compiler constructors. The author presents a simple extendable scheme for syntax and semantics to be defined rigorously. This scheme fits many programming languages, especially ALGOL-like ones. The author considers statements and programs to be notations of state transformations; in special cases storage state transformations. (5 refs).
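
    The state-transformation view can be made concrete with a small sketch (my own Python illustration, not the author's ALGOL-oriented formalism), in which a statement denotes a function from storage states to storage states and sequencing is function composition:

        # A statement viewed as a (storage) state transformation: a function from
        # states to states. A state here is just a mapping from variable names to values.
        def assign(var, expr):
            """State transformer denoted by the assignment `var := expr(state)`."""
            def transform(state):
                new_state = dict(state)
                new_state[var] = expr(state)
                return new_state
            return transform

        def seq(*stmts):
            """Sequential composition of statements is composition of transformations."""
            def transform(state):
                for s in stmts:
                    state = s(state)
                return state
            return transform

        program = seq(assign("x", lambda s: 1),
                      assign("y", lambda s: s["x"] + 2))
        print(program({}))   # {'x': 1, 'y': 3}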

  5. Mental health and resiliency following 44 months of terrorism: a survey of an Israeli national representative sample

    Directory of Open Access Journals (Sweden)

    Melamed Yuval

    2006-08-01

    Full Text Available Abstract Background Israeli citizens have been exposed to intense and ongoing terrorism since September 2000. We previously studied the mental health impact of terrorism on the Israeli population (Bleich et al., 2002); however, the long-term impact of ongoing terrorism has not yet been examined. The present study evaluated the psychological sequelae of 44 months of terrorism in Israel, and sought to identify factors that may contribute to vulnerability and resilience. Methods This was a telephone survey using stratified sampling of 828 households, which reached a representative sample of 702 adult Israeli residents (84.8% contact rate). In total, 501 people (60.5%) agreed to participate. The methodology was similar to that of our previous study. Exposure to terrorism and other traumatic events, number of traumatic stress-related symptoms (TSRS), percentage of respondents with symptom criteria for post-traumatic stress disorder (PTSD), traumatic stress (TS) resiliency, and feelings of depression, anxiety, optimism, sense of safety, and help-seeking were the main outcome measures. Results In total, 56 participants (11.2%) were directly exposed to a terrorist incident, and 101 (20.2%) had family members or friends exposed. Respondents reported a mean ± SD of 5.0 ± 4.5 TSRS; 45 (9%) met symptom criteria for PTSD; and 72 (14.4%) were TS-resilient. There were 147 participants (29.5%) who felt depressed, 50 (10.4%) felt anxious, and almost half (235; 47%) felt life-threatening danger; 48 (9.7%) felt the need for professional help. Women and people of Arab ethnicity had more TSRS, more PTSD, and less TS resiliency. Injury following a life-threatening experience, a major stressful life event, and a major loss of income were associated with PTSD. Immigrant status, lower education, low sense of safety, low sense of social support, high societal distress, and injury following life-threatening experiences were associated with TSRS. TSRS did not increase with exposure severity

  6. Certifying cost annotations in compilers

    CERN Document Server

    Amadio, Roberto M; Régis-Gianas, Yann; Saillard, Ronan

    2010-01-01

    We discuss the problem of building a compiler which can lift in a provably correct way pieces of information on the execution cost of the object code to cost annotations on the source code. To this end, we need a clear and flexible picture of: (i) the meaning of cost annotations, (ii) the method to prove them sound and precise, and (iii) the way such proofs can be composed. We propose a so-called labelling approach to these three questions. As a first step, we examine its application to a toy compiler. This formal study suggests that the labelling approach has good compositionality and scalability properties. In order to provide further evidence for this claim, we report our successful experience in implementing and testing the labelling approach on top of a prototype compiler written in OCAML for (a large fragment of) the C language.
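
    A toy illustration of the labelling idea (not the paper's compiler or its proof machinery) is sketched below: each source expression carries a label, the label is preserved on every generated stack-machine instruction, and summing assumed per-instruction costs by label lifts object-code costs back to source-level annotations.

        # Hypothetical mini language: ("add", e1, e2) | ("mul", e1, e2) | ("lit", n).
        COST = {"PUSH": 1, "ADD": 1, "MUL": 3}   # assumed per-instruction costs

        def compile_expr(expr, label):
            """Compile to stack-machine code, tagging each instruction with the
            source label it originates from."""
            op = expr[0]
            if op == "lit":
                return [("PUSH", expr[1], label)]
            code = compile_expr(expr[1], label) + compile_expr(expr[2], label)
            return code + [(op.upper(), None, label)]

        def cost_per_label(program):
            """Lift object-code costs back to source labels."""
            totals = {}
            for label, expr in program.items():
                code = compile_expr(expr, label)
                totals[label] = sum(COST[instr] for instr, _, _ in code)
            return totals

        # Two labelled source expressions: L1 = 2*3, L2 = (1+2)+3.
        program = {"L1": ("mul", ("lit", 2), ("lit", 3)),
                   "L2": ("add", ("add", ("lit", 1), ("lit", 2)), ("lit", 3))}
        print(cost_per_label(program))   # {'L1': 5, 'L2': 5}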

  7. Advanced C and C++ compiling

    CERN Document Server

    Stevanovic, Milan

    2014-01-01

    Learning how to write C/C++ code is only the first step. To be a serious programmer, you need to understand the structure and purpose of the binary files produced by the compiler: object files, static libraries, shared libraries, and, of course, executables. Advanced C and C++ Compiling explains the build process in detail and shows how to integrate code from other developers in the form of deployed libraries as well as how to resolve issues and potential mismatches between your own and external code trees. With the proliferation of open source, understanding these issues is increasingly the res

  8. A compiler for variational forms

    CERN Document Server

    Kirby, Robert C; 10.1145/1163641.1163644

    2011-01-01

    As a key step towards a complete automation of the finite element method, we present a new algorithm for automatic and efficient evaluation of multilinear variational forms. The algorithm has been implemented in the form of a compiler, the FEniCS Form Compiler FFC. We present benchmark results for a series of standard variational forms, including the incompressible Navier-Stokes equations and linear elasticity. The speedup compared to the standard quadrature-based approach is impressive; in some cases the speedup is as large as a factor of 1000.
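
    To make the comparison concrete, here is a small numpy sketch (my own illustration, not FFC output) that computes the local mass matrix of a linear triangular element twice: by a three-point quadrature rule, and from the closed form that a form compiler can in effect precompute. The triangle geometry is an arbitrary example.

        import numpy as np

        # Vertices of an arbitrary example triangle.
        v = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
        J = np.column_stack((v[1] - v[0], v[2] - v[0]))   # element Jacobian
        area = 0.5 * abs(np.linalg.det(J))

        # Quadrature route: three edge-midpoint points with weight area/3, exact for
        # quadratic integrands such as products of P1 basis functions. The barycentric
        # coordinates of each point are also the values of the three basis functions there.
        pts = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]])
        M_quad = sum((area / 3.0) * np.outer(p, p) for p in pts)

        # "Compiled" route: the precomputable closed form for the P1 mass matrix.
        M_exact = (area / 12.0) * (np.ones((3, 3)) + np.eye(3))

        print(np.allclose(M_quad, M_exact))   # True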

  9. Compiler validates units and dimensions

    Science.gov (United States)

    Levine, F. E.

    1980-01-01

    Software added to the compiler for the automated test system for the Space Shuttle decreases computer run errors by providing offline validation of engineering units used in system command programs. Validation procedures are general, though originally written for GOAL, a free-form language that accepts "English-like" statements, and may be adapted to other programming languages.

  10. 1988 Bulletin compilation and index

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1989-02-01

    This document is published to provide current information about the national program for managing spent fuel and high-level radioactive waste. This document is a compilation of issues from the 1988 calendar year. A table of contents and one index have been provided to assist in finding information.

  11. Generating a Pattern Matching Compiler by Partial Evaluation

    DEFF Research Database (Denmark)

    Jørgensen, Knud Jesper

    1991-01-01

    Keywords: computer science (datalogi), partial evaluation, compiling, denotational semantics, pattern matching, semantics-directed compiler generation.

  12. Beverage consumption among U.S. children aged 0–24 months: National Health and Nutrition Examination Survey (NHANES)

    Science.gov (United States)

    Data on beverage consumption patterns in early life are limited. The aim of this study was to describe beverage consumption by sociodemographic characteristics, along with water intake and sources of water among U.S. children aged 0–24 months. Data from 2740 children in the 2005–2012 NHANES were ana...

  13. HAL/S-FC compiler system specifications

    Science.gov (United States)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  14. Beverage Consumption among U.S. Children Aged 0–24 Months: National Health and Nutrition Examination Survey (NHANES)

    Science.gov (United States)

    Grimes, Carley A.; Szymlek-Gay, Ewa A.; Nicklas, Theresa A.

    2017-01-01

    Data on beverage consumption patterns in early life are limited. The aim of this study was to describe beverage consumption by sociodemographic characteristics, along with water intake and sources of water among U.S. children aged 0–24 months. Data from 2740 children in the 2005–2012 NHANES were analysed. Food intake was determined via one 24-h dietary recall. Beverages were categorised according to What We Eat In America groups. Poverty–Income ratio was used to define household income. During infancy (0–5.9 months and 6–11.9 months) infant formulas were the most commonly consumed beverage, 74.1% and 78.6% of children consuming, respectively. Comparatively fewer children, 41.6% and 24.3%, consumed breast milk. In toddlers (12–24 months), the most commonly consumed beverages were plain milk (83.6% of children consuming), water (68.6%), 100% fruit juice (51.8%) and sweetened beverages (31.2%). Non-Hispanic black and Mexican-American children were more likely to consume sweetened beverages, 100% fruit juice and infant formula than Non-Hispanic white children. Children from lower income households were more likely to consume sweetened beverages and 100% fruit juice and less likely to consume breast milk than children from higher income households. Total water intake increased with age and the contribution of water from food and beverage sources was ~20% and ~80% for all children, respectively. Disparities in beverage consumption by race/ethnicity and income level are apparent in early life. PMID:28335374

  15. Beverage Consumption among U.S. Children Aged 0–24 Months: National Health and Nutrition Examination Survey (NHANES

    Directory of Open Access Journals (Sweden)

    Carley A. Grimes

    2017-03-01

    Full Text Available Data on beverage consumption patterns in early life are limited. The aim of this study was to describe beverage consumption by sociodemographic characteristics, along with water intake and sources of water among U.S. children aged 0–24 months. Data from 2740 children in the 2005–2012 NHANES were analysed. Food intake was determined via one 24-h dietary recall. Beverages were categorised according to What We Eat In America groups. Poverty–Income ratio was used to define household income. During infancy (0–5.9 months and 6–11.9 months) infant formulas were the most commonly consumed beverage, 74.1% and 78.6% of children consuming, respectively. Comparatively fewer children, 41.6% and 24.3%, consumed breast milk. In toddlers (12–24 months), the most commonly consumed beverages were plain milk (83.6% of children consuming), water (68.6%), 100% fruit juice (51.8%) and sweetened beverages (31.2%). Non-Hispanic black and Mexican-American children were more likely to consume sweetened beverages, 100% fruit juice and infant formula than Non-Hispanic white children. Children from lower income households were more likely to consume sweetened beverages and 100% fruit juice and less likely to consume breast milk than children from higher income households. Total water intake increased with age and the contribution of water from food and beverage sources was ~20% and ~80% for all children, respectively. Disparities in beverage consumption by race/ethnicity and income level are apparent in early life.

  16. Verified Separate Compilation for C

    Science.gov (United States)

    2015-06-01

    independent linking, a new operational model of multilanguage module interaction that supports the statement and proof of cross-language contextual... Compiling Open Programs: A presumption of the preceding is that we at least have a specification of multilanguage programs. By multilanguage, I mean... Ahmed [PA14] have also observed, multilanguage semantics is useful not only for program understanding, but also as a mechanism for stating cross

  17. Explanatory Notes to Standard Compilation

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Ⅰ. Basis for Standard Compilation: Economic globalization and China's rapid expansion of foreign exchanges have drastically boosted the demand for translation services. As a result, enterprises offering translation services have mushroomed and formed a new industry unlike any other service industry. Though the output value of translation services is not high at the moment, their level and quality have a great impact on clients because the services cover foreign exchanges in various fields and the construction of major foreign-invested projects.

  18. Compilation of HPSG to TAG

    CERN Document Server

    Kasper, R; Netter, K; Vijay-Shanker, K; Kasper, Robert; Kiefer, Bernd; Netter, Klaus

    1995-01-01

    We present an implemented compilation algorithm that translates HPSG into lexicalized feature-based TAG, relating concepts of the two theories. While HPSG has a more elaborated principle-based theory of possible phrase structures, TAG provides the means to represent lexicalized structures more explicitly. Our objectives are met by giving clear definitions that determine the projection of structures from the lexicon, and identify maximal projections, auxiliary trees and foot nodes.

  19. VFC: The Vienna Fortran Compiler

    Directory of Open Access Journals (Sweden)

    Siegfried Benkner

    1999-01-01

    Full Text Available High Performance Fortran (HPF offers an attractive high‐level language interface for programming scalable parallel architectures providing the user with directives for the specification of data distribution and delegating to the compiler the task of generating an explicitly parallel program. Available HPF compilers can handle regular codes quite efficiently, but dramatic performance losses may be encountered for applications which are based on highly irregular, dynamically changing data structures and access patterns. In this paper we introduce the Vienna Fortran Compiler (VFC, a new source‐to‐source parallelization system for HPF+, an optimized version of HPF, which addresses the requirements of irregular applications. In addition to extended data distribution and work distribution mechanisms, HPF+ provides the user with language features for specifying certain information that decisively influence a program’s performance. This comprises data locality assertions, non‐local access specifications and the possibility of reusing runtime‐generated communication schedules of irregular loops. Performance measurements of kernels from advanced applications demonstrate that with a high‐level data parallel language such as HPF+ a performance close to hand‐written message‐passing programs can be achieved even for highly irregular codes.

  20. An Action Compiler Targeting Standard ML

    DEFF Research Database (Denmark)

    Iversen, Jørgen

    2005-01-01

    We present an action compiler that can be used in connection with an action semantics based compiler generator. Our action compiler produces code with faster execution times than code produced by other action compilers, and for some non-trivial test examples it is only a factor of two slower than the code produced by the Gnu C Compiler. Targeting Standard ML makes the description of the code generation simple and easy to implement. The action compiler has been tested on a description of the Core of Standard ML and a subset of C.

  1. A ten-month diseases survey on wild Litopenaeus setiferus (Decapoda: Penaeidae from Southern Gulf of Mexico

    Directory of Open Access Journals (Sweden)

    Rodolfo Enrique del Río-Rodríguez

    2013-09-01

    Full Text Available The development of shrimp aquaculture on the Mexican coasts of the Gulf of Mexico began to be explored using the Pacific white shrimp Litopenaeus vannamei in the mid-1990s. Concerns over the risk of disease transmission to the economically important native penaeids have been the main deterrent to the aquaculture of L. vannamei in the region. Concurrently, more than 10 years of research experience on the aquaculture suitability of the native Litopenaeus setiferus from the Terminos Lagoon, in the Yucatán Peninsula, have been accumulated. The aim of this study was to determine the seasonal variations of naturally acquired diseases and to detect possible exotic pathogens. For this, random subsamples (n~60) of juvenile L. setiferus were collected from monthly captures. In order to detect the widest range of pathogens, including the infectious hypodermal and hematopoietic necrosis (IHHNV) and white spot syndrome (WSSV) viruses, both histopathological and molecular methods were employed. Monthly prevalence (%) was calculated for every finding. We were able to detect a total of 16 distinct histological anomalies, for most of which the presumptive aetiological agent was readily identified. PCR results for the viruses were negative. For some pathogens and symbionts, the prevalence was significantly different between the adult and juvenile populations. Prevalence of diseases tended to be higher in juvenile shrimp than in adults. The results of this study indicate that L. setiferus carries a wide variety of pathogens and symbionts that seem to be endemic to penaeids of the Gulf of Mexico, and that juveniles were more prone than adults to acquiring pathogens and symbionts.

  2. Fault-Tree Compiler Program

    Science.gov (United States)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.

  3. Recommendations for a Retargetable Compiler.

    Science.gov (United States)

    1980-03-01

    [The indexed excerpt is garbled OCR of the report front matter; legible fragments follow.] ...releasable to the general public... has been reviewed and is approved for publication. APPROVED: Samuel A. Di Nitto, Jr., Project Engineer. RADC Project Engineer: Samuel A. Di Nitto, Jr. (ISIS). ...compiler for Ada can commence development in FY82. 1. INTRODUCTION: In this section, we discuss the current

  4. Lattice Simulations using OpenACC compilers

    CERN Document Server

    Majumdar, Pushan

    2013-01-01

    OpenACC compilers allow one to use Graphics Processing Units without having to write explicit CUDA codes. Programs can be modified incrementally using OpenMP like directives which causes the compiler to generate CUDA kernels to be run on the GPUs. In this article we look at the performance gain in lattice simulations with dynamical fermions using OpenACC compilers.

  5. Distributed memory compiler design for sparse problems

    Science.gov (United States)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
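
    The flavour of the runtime support can be illustrated with a simplified, stand-alone sketch (not the actual runtime library): an inspector examines an irregular index array against a block distribution and builds, for one process, the schedule of off-process elements that would have to be gathered before the loop executes.

        # Simplified "inspector" for irregular accesses to a block-distributed array.
        # Everything here is a toy stand-in for the runtime primitives described.

        def owner(global_index, block_size):
            """Rank owning a global index under a block distribution."""
            return global_index // block_size

        def build_gather_schedule(access_indices, my_rank, nprocs, block_size):
            """Return {remote rank: sorted global indices this process must fetch}."""
            assert all(0 <= g < nprocs * block_size for g in access_indices)
            schedule = {}
            for g in access_indices:
                r = owner(g, block_size)
                if r != my_rank:              # local accesses need no communication
                    schedule.setdefault(r, set()).add(g)
            return {r: sorted(idx) for r, idx in schedule.items()}

        # Example: 4 processes, global array of 16 elements (block size 4).
        # Process 1 executes a loop touching x[ia[i]] for an irregular index array ia.
        ia = [0, 5, 6, 6, 11, 15, 4]
        print(build_gather_schedule(ia, my_rank=1, nprocs=4, block_size=4))
        # -> {0: [0], 2: [11], 3: [15]}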

  6. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  7. The fault-tree compiler

    Science.gov (United States)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Corporation VAX with the VMS operating system.

  8. Geology, Bedrock, Bedrock geologic map compilation of the west half of the Asheville 1:100,000 scale map., Published in 2006, 1:100000 (1in=8333ft) scale, NC DENR / Div. of Land Resources / Geological Survey Section.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Geology, Bedrock dataset, published at 1:100000 (1in=8333ft) scale, was produced all or in part from Field Survey/GPS information as of 2006. It is described...

  9. Bathymetric Terrain Model of the U.S. Atlantic Margin (100-meter resolution) compiled by the U.S. Geological Survey (32-bit GeoTIFF, MERCATOR Projection, WGS 84)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric terrain models of seafloor morphology are an important component of marine geological investigations. Advances in acquisition and processing technologies...

  10. Bathymetric Terrain Model of the Puerto Rico Trench and Northeastern Caribbean Region Compiled by the U.S. Geological Survey From Multibeam Bathymetric Data Collected Between 2002 and 2013 (PRBATHOFR150, Esri Binary Grid, UTM19, WGS 84).

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric terrain models (BTMs) of seafloor morphology are an important component of marine geological investigations. Advances in technologies of acquiring and...

  14. A Symmetric Approach to Compilation and Decompilation

    DEFF Research Database (Denmark)

    Ager, Mads Sig; Danvy, Olivier; Goldberg, Mayer

    2002-01-01

    Just as an interpreter for a source language can be turned into a compiler from the source language to a target language, we observe that an interpreter for a target language can be turned into a compiler from the target language to a source language. In both cases, the key issue is the choice...... and the SECD-machine language. In each case, we prove that the target-to-source compiler is a left inverse of the source-to-target compiler, i.e., that it is a decompiler. In the context of partial evaluation, the binding-time shift of going from a source interpreter to a compiler is classically referred...... to as a Futamura projection. By symmetry, it seems logical to refer to the binding-time shift of going from a target interpreter to a compiler as a Futamura embedding....

  15. Compiling scheme using abstract state machines

    OpenAIRE

    2003-01-01

    The project investigates the use of Abstract State Machines in the process of computer program compilation. Compilation is the production of machine code from a source program written in a high-level language. A compiler is a program written for that purpose. Machine code is the computer-readable representation of sequences of computer instructions. An Abstract State Machine (ASM) is a notional computing machine, developed by Yuri Gurevich, for accurately and easily representing the semantics of

  16. Compiler-assisted static checkpoint insertion

    Science.gov (United States)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1992-01-01

    This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides for stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
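
    The polling idea can be illustrated with a small sketch (my own simplification in Python, not the paper's GNU CC implementation): a counter stands in for the compiler-maintained progress measure, a poll is inserted at the loop back-edge, and a checkpoint is taken whenever the counter crosses the interval threshold, so checkpoint locations depend only on execution progress and are reproducible across runs. The interval, file names, and saved state are invented for the example.

        import pickle

        CHECKPOINT_INTERVAL = 1_000          # "instructions" between checkpoints
        _counter = 0                         # stand-in for the compiler-maintained counter

        def poll(state, every=CHECKPOINT_INTERVAL):
            """Compiler-inserted poll: checkpoint when the counter crosses a multiple
            of the interval. Placement at loop back-edges makes locations reproducible."""
            global _counter
            _counter += 1
            if _counter % every == 0:
                with open(f"ckpt_{_counter}.pkl", "wb") as f:
                    pickle.dump(state, f)    # save a snapshot of the live state

        def long_running_loop(n):
            acc = 0
            for i in range(n):
                acc += i * i                 # original loop body
                poll({"i": i, "acc": acc})   # poll inserted at the back-edge
            return acc

        print(long_running_loop(3_500))      # writes ckpt_1000.pkl, ckpt_2000.pkl, ckpt_3000.pkl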

  17. A Class-Specific Optimizing Compiler

    Directory of Open Access Journals (Sweden)

    Michael D. Sharp

    1993-01-01

    Full Text Available Class-specific optimizations are compiler optimizations specified by the class implementor to the compiler. They allow the compiler to take advantage of the semantics of the particular class so as to produce better code. Optimizations of interest include the strength reduction of class::array address calculations, elimination of large temporaries, and the placement of asynchronous send/recv calls so as to achieve computation/communication overlap. We will outline our progress towards the implementation of a C++ compiler capable of incorporating class-specific optimizations.
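
    As a generic illustration of one optimization named above, strength reduction of an address calculation (shown here in Python rather than the authors' C++ setting), the multiply on the loop induction variable is replaced by a running offset that is only incremented:

        # Before: each iteration recomputes the element offset with a multiply
        # involving the loop induction variable.
        def column_sum_naive(flat, col, nrows, ncols):
            total = 0
            for i in range(nrows):
                offset = i * ncols + col      # multiply on every iteration
                total += flat[offset]
            return total

        # After strength reduction: the multiply is replaced by an offset that is
        # incremented by the stride on every iteration.
        def column_sum_reduced(flat, col, nrows, ncols):
            total = 0
            offset = col
            for i in range(nrows):
                total += flat[offset]
                offset += ncols               # addition replaces the multiply
            return total

        flat = list(range(12))                # 3 x 4 matrix stored row-major
        assert column_sum_naive(flat, 1, 3, 4) == column_sum_reduced(flat, 1, 3, 4) == 1 + 5 + 9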

  18. Workflow with pitfalls to derive a regional airborne magnetic compilation

    Science.gov (United States)

    Brönner, Marco; Baykiev, Eldar; Ebbing, Jörg

    2017-04-01

    Today, large-scale magnetic maps are usually a patchwork of different airborne surveys of different sizes, resolutions, and vintages. Airborne magnetic acquisition is a fast and economic method to map and gain geological and tectonic information for large areas, onshore and offshore. Depending on the aim of a survey, acquisition parameters like altitude and profile spacing are usually adjusted to match the purpose of the investigation. The subsequent data processing commonly follows a standardized workflow comprising core-field subtraction and line leveling to yield a coherent crustal-field magnetic grid for a survey area. The resulting data make it possible to correlate with geological and tectonic features in the subsurface, which is of importance for e.g. oil and mineral exploration. Crustal-scale magnetic interpretation and modeling demand regional compilation of magnetic data and the merger of adjacent magnetic surveys. These studies not only focus on shallower sources, reflected by short- to intermediate-wavelength magnetic anomalies, but also have a particular interest in the long wavelengths deriving from deep-seated sources. However, whilst the workflow to produce such a merger is supported by quite a few powerful routines, the resulting compilation contains several pitfalls and limitations, which have been discussed before but are still little recognized. The maximum wavelength that can be resolved in each individual survey is directly related to the survey size, and consequently a merger will contribute erroneous long-wavelength components to the magnetic data compilation. To minimize this problem and to homogenize the longer wavelengths, a first-order approach is to combine the airborne compilation with satellite magnetic data, which is sufficient only under particular preconditions. A more advanced approach considers the gap in frequencies between airborne and satellite data, which motivated
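
    A minimal sketch of the kind of spectral blending alluded to at the end is given below. It is entirely illustrative: synthetic 1-D profiles, an assumed 100 km crossover wavelength, and Gaussian filters stand in for the authors' actual workflow. Long wavelengths are taken from a satellite-like field and short wavelengths from the airborne compilation, combined with complementary low-pass and high-pass filters in the wavenumber domain.

        import numpy as np

        # Synthetic 1-D profiles (assumed example data): the airborne merge has
        # reliable short wavelengths but distorted long wavelengths, while the
        # satellite field resolves only the long wavelengths.
        n, dx = 1024, 0.5                       # samples and spacing in km
        x = np.arange(n) * dx
        long_part = 40 * np.sin(2 * np.pi * x / 300.0)
        short_part = 5 * np.sin(2 * np.pi * x / 8.0)
        airborne = short_part + 0.3 * long_part     # erroneous long wavelengths
        satellite = long_part

        # Complementary Gaussian filters around an assumed 100 km crossover wavelength.
        k = np.fft.rfftfreq(n, d=dx)            # wavenumber in cycles/km
        k_cross = 1.0 / 100.0
        lowpass = np.exp(-(k / k_cross) ** 2)
        highpass = 1.0 - lowpass

        merged = np.fft.irfft(lowpass * np.fft.rfft(satellite) +
                              highpass * np.fft.rfft(airborne), n=n)

        truth = long_part + short_part
        print(f"std error, airborne only: {np.std(airborne - truth):.2f}")
        print(f"std error, blended:       {np.std(merged - truth):.2f}")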

  19. Proving Correctness of Compilers Using Structured Graphs

    DEFF Research Database (Denmark)

    Bahr, Patrick

    2014-01-01

    We present an approach to compiler implementation using Oliveira and Cook’s structured graphs that avoids the use of explicit jumps in the generated code. The advantage of our method is that it takes the implementation of a compiler using a tree type along with its correctness proof and turns it ...

  20. Criteria for Evaluating the Performance of Compilers

    Science.gov (United States)

    1974-10-01

    ...skilled programmer to take advantage of all of the environmental special features which could be exploited by a compiler. These programs are then... [garbled OCR] ...programs, except remove all statement labels. Subtract the base values obtained by compiling and running a program containing the

  1. The Molen compiler for reconfigurable architectures

    NARCIS (Netherlands)

    Moscu Panainte, E.

    2007-01-01

    In this dissertation, we present the Molen compiler framework that targets reconfigurable architectures under the Molen Programming Paradigm. More specifically, we introduce a set of compiler optimizations that address one of the main shortcomings of the reconfigurable architectures, namely the reco

  2. Compiler Driven Code Comments and Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Karlsson, Sven

    2011-01-01

    Helping programmers write parallel software is an urgent problem given the popularity of multi-core architectures. Engineering compilers which automatically parallelize and vectorize code has turned out to be very challenging. Consequently, compilers are very selective with respect to the coding...... patterns they can optimize. We present an interactive approach and a tool set which leverages advanced compiler analysis and optimizations while retaining programmer control over the source code and its transformation. This allows optimization even when programmers refrain from enabling optimizations...... to preserve accurate debug information or to avoid bugs in the compiler. It also allows the source code to carry optimizations from one compiler to another. Secondly, our tool-set provides feedback on why optimizations do not apply to a code fragment and suggests workarounds which can be applied automatically...

  3. Prevalence and Risk Factors of Overweight and Obesity among Children Aged 6–59 Months in Cameroon: A Multistage, Stratified Cluster Sampling Nationwide Survey

    Science.gov (United States)

    Tchoubi, Sébastien; Sobngwi-Tambekou, Joëlle; Noubiap, Jean Jacques N.; Asangbeh, Serra Lem; Nkoum, Benjamin Alexandre; Sobngwi, Eugene

    2015-01-01

    Background Childhood obesity is one of the most serious public health challenges of the 21st century. This study determined the prevalence and risk factors of overweight and obesity among children aged 6 months to 5 years in Cameroon in 2011. Methods Four thousand five hundred and eighteen children (2205 boys and 2313 girls) aged between 6 and 59 months were sampled from the 2011 Demographic Health Survey (DHS) database. Body Mass Index (BMI) z-scores based on the WHO 2006 reference population were chosen to estimate overweight (BMI z-score > 2) and obesity (BMI z-score for age > 3). Regression analyses were performed to investigate risk factors of overweight/obesity. Results The prevalence of overweight and obesity was 8% (1.7% for obesity alone). Boys were more affected by overweight than girls, with prevalences of 9.7% and 6.4% respectively. The highest prevalence of overweight was observed in the Grassfield area (including people living in the West and North-West regions) (15.3%). Factors that were independently associated with overweight and obesity included: having an overweight mother (adjusted odds ratio (aOR) = 1.51; 95% CI 1.15 to 1.97) or an obese mother (aOR = 2.19; 95% CI 1.55 to 3.07), compared to having a normal-weight mother; high birth weight (aOR = 1.69; 95% CI 1.24 to 2.28) compared to normal birth weight; male gender (aOR = 1.56; 95% CI 1.24 to 1.95); low birth rank (aOR = 1.35; 95% CI 1.06 to 1.72); being aged 13–24 months (aOR = 1.81; 95% CI 1.21 to 2.66) or 25–36 months (aOR = 2.79; 95% CI 1.93 to 4.13) compared to being aged 45 to 49 months; and living in the Grassfield area (aOR = 2.65; 95% CI 1.87 to 3.79) compared to living in the Forest area. Muslim religion appeared to be a protective factor (aOR = 0.67; 95% CI 0.46 to 0.95) compared to Christian religion. Conclusion This study underlines a high prevalence of early childhood overweight with significant disparities between the ecological areas of Cameroon. Risk factors of overweight included high maternal BMI, high birth weight, male

  4. Vaccination coverage levels among Alaska Native children aged 19-35 months--National Immunization Survey, United States, 2000-2001.

    Science.gov (United States)

    2003-08-01

    In 2000, a total of 118,846 persons indicated that their race/ethnicity was Alaska Native (AN), either alone or in combination with one or more other racial/ethnic groups. AN groups comprise 19% of the population of Alaska and 0.4% of the total U.S. population. The AN grouping includes Eskimos, Aleuts, and Alaska Indians (members of the Alaska Athabaskan, Tlingit, Haida, or other AN tribes). Eskimo represented the largest AN tribal grouping, followed by Tlingit/Haida, Alaska Athabascan, and Aleut. Vaccination coverage levels among AN children have not been reported previously. This report presents data from the National Immunization Survey (NIS) for 2000-2001, which indicate that vaccination coverage levels among AN children aged 19-35 months exceeded the national health objective for 2010 (objective no. 14-22) for the majority of vaccines. This achievement indicates the effectiveness of using multiple strategies to increase vaccination coverage. Similar efforts might increase vaccination coverage in other rural regions with American Indian (AI)/AN populations.

  5. The Sloan Digital Sky Survey Reverberation Mapping Project: First Broad-line Hbeta and MgII Lags at z>~0.3 from six-Month Spectroscopy

    CERN Document Server

    Shen, Yue; Grier, C J; Peterson, Bradley M; Denney, Kelly D; Trump, Jonathan R; Sun, Mouyuan; Brandt, W N; Kochanek, Christopher S; Dawson, Kyle S; Green, Paul J; Greene, Jenny E; Hall, Patrick B; Ho, Luis C; Jiang, Linhua; Kinemuchi, Karen; McGreer, Ian D; Petitjean, Patrick; Richards, Gordon T; Schneider, Donald P; Strauss, Michael A; Tao, Charling; Wood-Vasey, W M; Zu, Ying; Pan, Kaike; Bizyaev, Dmitry; Ge, Jian; Oravetz, Daniel; Simmons, Audrey

    2015-01-01

    Reverberation mapping (RM) measurements of broad-line region (BLR) lags in z>0.3 quasars are important for directly measuring black hole masses in these distant objects, but so far there have been limited attempts and success given the practical difficulties of RM in this regime. Here we report preliminary results of 15 BLR lag measurements from the Sloan Digital Sky Survey Reverberation Mapping (SDSS-RM) project, a dedicated RM program with multi-object spectroscopy designed for RM over a wide redshift range. The lags are based on the 2014 spectroscopic light curves alone (32 epochs over 6 months) and focus on the Hbeta and MgII broad lines in the 100 lowest-redshift (z0.3 is not yet possible due to the limitations in our current lag sample and selection biases inherent to our program. Our results demonstrate the general feasibility and potential of multi-object RM for z>0.3 quasars, and motivate more intensive spectroscopic and photometric monitoring to derive high-quality lag measurements for these objects...
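
    For readers unfamiliar with how a BLR lag is extracted from such light curves, the sketch below implements a bare-bones interpolated cross-correlation: the continuum is interpolated onto lag-shifted line epochs and the trial lag that maximizes the correlation coefficient is taken as the lag estimate. The synthetic light curves and the 10-day input lag are invented for the demonstration; this is not the SDSS-RM analysis pipeline.

        import numpy as np

        rng = np.random.default_rng(0)

        def signal(t):
            """Driving continuum variability (arbitrary shape for the demo)."""
            return np.sin(2 * np.pi * t / 60.0) + 0.3 * np.sin(2 * np.pi * t / 23.0)

        # Synthetic light curves: 32 epochs over ~6 months; the line echoes the
        # continuum with an assumed lag of 10 days.
        true_lag = 10.0
        t_cont = np.sort(rng.uniform(0, 180, 32))
        cont = signal(t_cont) + rng.normal(0, 0.05, t_cont.size)
        t_line = np.sort(rng.uniform(0, 180, 32))
        line = signal(t_line - true_lag) + rng.normal(0, 0.05, t_line.size)

        def ccf_lag(t_c, f_c, t_l, f_l, lags):
            """Trial lag maximizing the correlation between the line flux and the
            continuum interpolated at (line epoch - lag)."""
            scores = []
            for lag in lags:
                shifted = t_l - lag
                ok = (shifted >= t_c.min()) & (shifted <= t_c.max())  # no extrapolation
                if ok.sum() < 10:
                    scores.append(-np.inf)
                    continue
                interp = np.interp(shifted[ok], t_c, f_c)
                scores.append(np.corrcoef(interp, f_l[ok])[0, 1])
            return lags[int(np.argmax(scores))]

        trial_lags = np.arange(-30.0, 60.0, 0.5)
        print(f"recovered lag = {ccf_lag(t_cont, cont, t_line, line, trial_lags):.1f} days")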

  6. Dietary Diversity and Meal Frequency Practices among Infant and Young Children Aged 6–23 Months in Ethiopia: A Secondary Analysis of Ethiopian Demographic and Health Survey 2011

    Directory of Open Access Journals (Sweden)

    Melkam Aemro

    2013-01-01

    Full Text Available Background. Appropriate complementary feeding practice is essential for the growth and development of children. This study aimed to assess the dietary diversity and meal frequency practices of infants and young children in Ethiopia. Methods. Data collected in the Ethiopian Demographic and Health Survey (EDHS) from December 2010 to June 2011 were used for this study. The data were extracted, arranged, recoded, and analyzed using SPSS version 17. A total of 2836 children aged 6–23 months were used for the final analysis. Both bivariate and multivariate analyses were done to identify predictors of feeding practices. Result. The proportions of children with an adequate dietary diversity score and adequate meal frequency were 10.8% and 44.7%, respectively. Children born into the richest households showed a better dietary diversity score (OR = 0.256). The number of children aged less than five years in the household was an important predictor of dietary diversity (OR = 0.690). Mothers who had exposure to media were more likely to give adequate meal frequency to their children (OR = 0.707). Conclusion. Dietary diversity and meal frequency practices were inadequate in Ethiopia. Wealth quintile, exposure to media, and number of children affected feeding practices. Improving economic status, a habit of eating together, and exposure to media are important to improve infant feeding practices in Ethiopia.

  7. Compilation of gallium resource data for bauxite deposits

    Science.gov (United States)

    Schulte, Ruth F.; Foley, Nora K.

    2014-01-01

    Gallium (Ga) concentrations for bauxite deposits worldwide have been compiled from the literature to provide a basis for research regarding the occurrence and distribution of Ga worldwide, as well as between types of bauxite deposits. In addition, this report is an attempt to bring together reported Ga concentration data into one database to supplement ongoing U.S. Geological Survey studies of critical mineral resources. The compilation of Ga data consists of location, deposit size, bauxite type and host rock, development status, major oxide data, trace element (Ga) data and analytical method(s) used to derive the data, and tonnage values for deposits within bauxite provinces and districts worldwide. The range in Ga concentrations for bauxite deposits worldwide is

  8. Compiler Driven Code Comments and Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Karlsson, Sven

    2010-01-01

    patterns they can optimize. We present an interactive approach which leverages advanced compiler analysis and optimizations while retaining programmer control over the source code and its transformation. This allows optimization even when programmers refrain from enabling optimizations to preserve...... accurate debug information or to avoid bugs in the compiler. It also allows the source code to carry optimizations from one compiler to another. Secondly, our tool-set provides feedback on why optimizations do not apply to a code fragment and suggests workarounds which can be applied automatically. We...

  9. Python based high-level synthesis compiler

    Science.gov (United States)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based High-Level Synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the created Python-based compiler.

  10. Determinants of stunting and severe stunting among Burundian children aged 6-23 months: evidence from a national cross-sectional household survey, 2014.

    Science.gov (United States)

    Nkurunziza, Sandra; Meessen, Bruno; Van Geertruyden, Jean-Pierre; Korachais, Catherine

    2017-07-25

    Burundi is one of the poorest countries and is among the four countries with the highest prevalence of stunting (58%) among children aged less than 5 years. This situation undermines the economic growth of the country, as undernutrition is strongly associated with less schooling and reduced economic productivity. Identifying the determinants of stunting and severe stunting may help policy-makers to direct the limited Burundian resources to the most vulnerable segments of the population, and thus make resource allocation more cost effective. This study aimed to identify predictors of stunting and severe stunting among children aged less than two years in Burundi. The sample is made up of 6199 children aged 6 to 23 months with complete anthropometric measurements from the baseline survey of an impact evaluation study of the Performance-Based Financing (PBF) scheme applied to nutrition services in Burundi from 2015 to 2017. Binary and multivariable logistic regression analyses were used to examine stunting and severe stunting against a set of child, parental, and household variables such as the child's age or breastfeeding pattern, the mother's age or knowledge of malnutrition, and household size or socio-economic status. The prevalence of stunting and severe stunting was 53% [95% CI: 51.8-54.3] and 20.9% [95% CI: 19.9-22.0], respectively. Compared with children aged 6-11 months, children aged 12-17 months and 18-23 months had a higher risk of stunting (AdjOR 2.1; 95% CI: 1.8-2.4 and AdjOR 3.2; 95% CI: 2.8-3.7, respectively). Other predictors of stunting were small size at birth (AdjOR=1.5; 95% CI: 1.3-1.7 for medium-size babies at birth and AdjOR=2.9; 95% CI: 2.4-3.6 for small-size babies at birth) and male sex (AdjOR=1.5, 95% CI: 1.4-1.8). In addition, mothers having no education (AdjOR=1.6; 95% CI: 1.2-2.1), mothers incorrectly assessing their child's nutritional status (AdjOR=3.3; 95% CI: 2.8-4), and delivering at home (AdjOR=1.4; 95% CI: 1.2-1.6) were found to be predictors of stunting. More than 2 under-five children in the

  11. A Compilation of Internship Reports - 2012

    Energy Technology Data Exchange (ETDEWEB)

    Stegman M.; Morris, M.; Blackburn, N.

    2012-08-08

    This compilation documents all research projects undertaken by the 2012 summer Department of Energy - Workforce Development for Teachers and Scientists interns during their internship program at Brookhaven National Laboratory.

  12. Extension of Alvis compiler front-end

    Science.gov (United States)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr

    2015-12-01

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameter types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.
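
    For readers unfamiliar with LTS-based semantics, the sketch below illustrates, in generic Python rather than the Haskell middle-stage representation used by the Alvis compiler, how a labelled transition system can be enumerated by exhaustive exploration of formally defined states; the toy counter model is an assumption made purely for illustration.

        from collections import deque

        # Generic labelled transition system exploration (illustrative, not Alvis itself).
        # States are hashable values; `step` returns (label, next_state) pairs.

        def explore_lts(initial, step):
            """Return the set of reachable states and the list of labelled transitions."""
            seen, edges = {initial}, []
            queue = deque([initial])
            while queue:
                state = queue.popleft()
                for label, nxt in step(state):
                    edges.append((state, label, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen, edges

        # Toy model: a counter that can be incremented up to 2 or reset.
        def step(n):
            moves = [("reset", 0)]
            if n < 2:
                moves.append(("inc", n + 1))
            return moves

        if __name__ == "__main__":
            states, transitions = explore_lts(0, step)
            print(sorted(states))      # [0, 1, 2]
            print(transitions)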

  13. Gravity Data for Indiana (300 records compiled)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The gravity data (300 records) were compiled by Purdue University. This data base was received in February 1993. Principal gravity parameters include Free-air...

  14. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...... and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose PRIMULA-generated propositional instances have thousands of variables, and whose jointrees have clusters......
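
    The evaluate-and-differentiate step can be pictured with a generic sketch: an arithmetic circuit is a DAG of sum and product nodes over leaf values, and one upward pass plus one downward pass yields the circuit value and all partial derivatives in time linear in the circuit size. The toy circuit below is invented for illustration and is not a PRIMULA-compiled network.

        # Toy arithmetic circuit: nodes are ('leaf', value), ('add', kids) or ('mul', kids),
        # stored in topological order so parents come after their children.
        # One upward pass evaluates the circuit; one downward pass accumulates the
        # partial derivative of the output with respect to every node.

        def evaluate(nodes):
            val = []
            for kind, arg in nodes:
                if kind == "leaf":
                    val.append(arg)
                elif kind == "add":
                    val.append(sum(val[k] for k in arg))
                else:  # "mul"
                    v = 1.0
                    for k in arg:
                        v *= val[k]
                    val.append(v)
            return val

        def differentiate(nodes, val):
            dr = [0.0] * len(nodes)
            dr[-1] = 1.0                       # derivative of the root w.r.t. itself
            for i in range(len(nodes) - 1, -1, -1):
                kind, arg = nodes[i]
                if kind == "add":
                    for k in arg:
                        dr[k] += dr[i]
                elif kind == "mul":
                    for k in arg:
                        others = 1.0
                        for j in arg:
                            if j != k:
                                others *= val[j]
                        dr[k] += dr[i] * others
            return dr

        if __name__ == "__main__":
            # f = (x + y) * z  with x=2, y=3, z=4
            circuit = [("leaf", 2.0), ("leaf", 3.0), ("leaf", 4.0),
                       ("add", [0, 1]), ("mul", [2, 3])]
            values = evaluate(circuit)
            print(values[-1])                           # 20.0
            print(differentiate(circuit, values)[:3])   # df/dx=4, df/dy=4, df/dz=5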

  15. Compilation of Pilot Cognitive Ability Norms

    Science.gov (United States)

    2011-12-01

    Compilation of Pilot Cognitive Ability Norms; Raymond E. King, U.S. Air Force School of Aerospace Medicine; report AFRL-SA-WP-TR-2012-0001 (2011). Only fragments of the report documentation page survive in this record, including a note that Comprehension (Comp) measures "social acculturation" and "social intelligence".

  16. Verified Compilation of Floating-Point Computations

    OpenAIRE

    Boldo, Sylvie; Jourdan, Jacques-Henri; Leroy, Xavier; Melquiond, Guillaume

    2015-01-01

    Floating-point arithmetic is known to be tricky: roundings, formats, exceptional values. The IEEE-754 standard was a push towards straightening the field and made formal reasoning about floating-point computations easier and flourishing. Unfortunately, this is not sufficient to guarantee the final result of a program, as several other actors are involved: programming language, compiler, architecture. The CompCert formally-verified compiler provides a solution to this p...

  17. Compilation of Sandia Laboratories technical capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Lundergan, C. D.; Mead, P. L. [eds.]

    1975-11-01

    This report is a compilation of 17 individual documents that together summarize the technical capabilities of Sandia Laboratories. Each document in this compilation contains details about a specific area of capability. Examples of application of the capability to research and development problems are provided. An eighteenth document summarizes the content of the other seventeen. Each of these documents was issued with a separate report number (SAND 74-0073A through SAND 74-0091, except -0078). (RWR)

  18. Electronic circuits for communications systems: A compilation

    Science.gov (United States)

    1972-01-01

    The compilation of electronic circuits for communications systems is divided into thirteen basic categories, each representing an area of circuit design and application. The compilation items are moderately complex and, as such, would appeal to the applications engineer. However, the rationale for the selection criteria was tailored so that the circuits would reflect fundamental design principles and applications, with an additional requirement for simplicity whenever possible.

  19. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    Science.gov (United States)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where
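
    SOL's own syntax is not reproduced in this record, so the contrast it draws, stating an optimization problem declaratively instead of hand-wiring optimizer subroutine calls, is sketched below in Python with SciPy standing in for the numerical optimizer; the objective, constraint and starting point are made-up examples, not SOL code.

        from scipy.optimize import minimize

        # Declarative-style problem description (illustrative analogue of SOL's idea):
        # the objective, design variables and constraints are stated in one place, and a
        # small driver hands them to a numerical optimizer.
        problem = {
            "objective":   lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
            "x0":          [0.0, 0.0],
            "constraints": [{"type": "ineq", "fun": lambda x: 4.0 - (x[0] + x[1])}],
        }

        def solve(p):
            # The optimizer/model interface is established once, here, rather than
            # being re-wired by hand for every new design problem.
            return minimize(p["objective"], p["x0"], constraints=p["constraints"])

        if __name__ == "__main__":
            res = solve(problem)
            print(res.x, res.fun)   # optimum near (3, -1) with objective ~0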

  20. The NuSTAR Serendipitous Survey: The 40-month Catalog and the Properties of the Distant High-energy X-Ray Source Population

    Science.gov (United States)

    Lansbury, G. B.; Stern, D.; Aird, J.; Alexander, D. M.; Fuentes, C.; Harrison, F. A.; Treister, E.; Bauer, F. E.; Tomsick, J. A.; Baloković, M.; Del Moro, A.; Gandhi, P.; Ajello, M.; Annuar, A.; Ballantyne, D. R.; Boggs, S. E.; Brandt, W. N.; Brightman, M.; Chen, C.-T. J.; Christensen, F. E.; Civano, F.; Comastri, A.; Craig, W. W.; Forster, K.; Grefenstette, B. W.; Hailey, C. J.; Hickox, R. C.; Jiang, B.; Jun, H. D.; Koss, M.; Marchesi, S.; Melo, A. D.; Mullaney, J. R.; Noirot, G.; Schulze, S.; Walton, D. J.; Zappacosta, L.; Zhang, W. W.

    2017-02-01

    We present the first full catalog and science results for the Nuclear Spectroscopic Telescope Array (NuSTAR) serendipitous survey. The catalog incorporates data taken during the first 40 months of NuSTAR operation, which provide ≈20 Ms of effective exposure time over 331 fields, with an areal coverage of 13 deg², and 497 sources detected in total over the 3–24 keV energy range. There are 276 sources with spectroscopic redshifts and classifications, largely resulting from our extensive campaign of ground-based spectroscopic follow-up. We characterize the overall sample in terms of the X-ray, optical, and infrared source properties. The sample is primarily composed of active galactic nuclei (AGNs), detected over a large range in redshift from z = 0.002 to 3.4 (median of z = 0.56), but also includes 16 spectroscopically confirmed Galactic sources. There is a large range in X-ray flux, from log(f_3-24 keV / erg s^-1 cm^-2) ≈ -14 to -11, and in rest-frame 10–40 keV luminosity, from log(L_10-40 keV / erg s^-1) ≈ 39 to 46, with a median of 44.1. Approximately 79% of the NuSTAR sources have lower-energy (<10 keV) ... population, from ≈15% at the highest luminosities (L_X > 10^44 erg s^-1) to ≈80% at the lowest luminosities (L_X ...) ... sample. This is higher, albeit at a low significance level, than the type 2 fraction measured for redshift- and luminosity-matched AGNs selected by <10 keV X-ray missions.

  1. Impact of the Great East Japan Earthquake on feeding methods and newborn growth at 1 month postpartum: results from the Fukushima Health Management Survey

    Energy Technology Data Exchange (ETDEWEB)

    Kyozuka, Hyo; Yasuda, Shun; Kawamura, Makoto; Nomura, Yasuhisa; Fujimori, Keiya [Fukushima Medical University, Department of Obstetrics and Gynecology, School of Medicine, Fukushima (Japan); Goto, Aya; Yasumura, Seiji [Radiation Medical Science Center for the Fukushima Health Management Survey, Fukushima (Japan); Fukushima Medical University, Department of Public Health, School of Medicine, Fukushima (Japan); Abe, Masafumi [Radiation Medical Science Center for the Fukushima Health Management Survey, Fukushima (Japan)

    2016-05-15

    This study examined the effects of three disasters (the Great East Japan Earthquake of March 11, 2011, followed by a tsunami and the Fukushima Daiichi Nuclear Power Plant accident) on feeding methods and growth in infants born after the disasters. Using results from the Fukushima Health Management Survey, Soso District (the affected area where the damaged nuclear power plant is located) and Aizu District (a less-affected area located farthest from the plant) were compared. In this study, newborn and maternal background characteristics were examined, as well as feeding methods, and other factors for newborn growth at the first postpartum examination for 1706 newborns born after the disaster in the affected (n = 836) and less-affected (n = 870) areas. Postpartum examinations took place 1 month after birth. Feeding method trends were examined, and multivariate regression analyses were used to investigate effects on newborn mass gain. There were no significant differences in background characteristics among newborns in these areas. When birth dates were divided into four periods to assess trends, no significant change in the exclusive breastfeeding rate was found, while the exclusive formula-feeding rate was significantly different across time periods in the affected area (p = 0.02). Multivariate analyses revealed no significant independent associations of maternal depression and change in medical facilities (possible disaster effects) with other newborn growth factors in either area. No area differences in newborn growth at the first postpartum examination or in exclusive breastfeeding rates were found during any period. Exclusive formula-feeding rates varied across time periods in the affected, but not in the less-affected area. It is concluded that effective guidance to promote breast-feeding and prevent exclusive use of formula is important for women in post-disaster circumstances. (orig.)

  2. Nutritional status and dietary intakes of children aged 6 months to 12 years: findings of the Nutrition Survey of Malaysian Children (SEANUTS Malaysia).

    Science.gov (United States)

    Poh, Bee Koon; Ng, Boon Koon; Siti Haslinda, Mohd Din; Nik Shanita, Safii; Wong, Jyh Eiin; Budin, Siti Balkis; Ruzita, Abd Talib; Ng, Lai Oon; Khouw, Ilse; Norimah, A Karim

    2013-09-01

    The dual burden of malnutrition reportedly coexists in Malaysia; however, existing data are scarce and do not adequately represent the nutritional status of Malaysian children. The Nutrition Survey of Malaysian Children was carried out with the aim of assessing the nutritional status in a sample of nationally representative population of children aged 6 months to 12 years. A total of 3542 children were recruited using a stratified random sampling method. Anthropometric measurements included weight, height, mid-upper arm circumference, and waist and hip circumferences. Blood biochemical assessment involved analyses of Hb, serum ferritin, and vitamins A and D. Dietary intake was assessed using semi-quantitative FFQ, and nutrient intakes were compared with the Malaysian Recommended Nutrient Intakes (RNI). The prevalence of overweight (9·8%) and obesity (11·8%) was higher than that of thinness (5·4%) and stunting (8·4%). Only a small proportion of children had low levels of Hb (6·6%), serum ferritin (4·4%) and vitamin A (4·4%), but almost half the children (47·5%) had vitamin D insufficiency. Dietary intake of the children was not compatible with the recommendations, where more than one-third did not achieve the Malaysian RNI for energy, Ca and vitamin D. The present study revealed that overnutrition was more prevalent than undernutrition. The presence of high prevalence of vitamin D insufficiency and the inadequate intake of Ca and vitamin D are of concern. Hence, strategies for improving the nutritional status of Malaysian children need to consider both sides of malnutrition and also put emphasis on approaches for the prevention of overweight and obesity as well as vitamin D insufficiency.

  3. Monthly errors

    Data.gov (United States)

    U.S. Environmental Protection Agency — The 2006 monthly average statistical metrics for 2m Q (g kg-1) domain-wide for the base and MODIS WRF simulations against MADIS observations. This dataset is...

  4. Compilation of VS30 Data for the United States

    Science.gov (United States)

    Yong, Alan; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Odum, Jack K.; Stephenson, William J.; Haefner, Scott

    2016-01-01

    VS30, the time-averaged shear-wave velocity (VS) to a depth of 30 meters, is a key index adopted by the earthquake engineering community to account for seismic site conditions. VS30 is typically based on geophysical measurements of VS derived from invasive and noninvasive techniques at sites of interest. Owing to cost considerations, as well as logistical and environmental concerns, VS30 data are sparse or not readily available for most areas. Where data are available, VS30 values are often assembled in assorted formats that are accessible from disparate and (or) impermanent Web sites. To help remedy this situation, we compiled VS30 measurements obtained by studies funded by the U.S. Geological Survey (USGS) and other governmental agencies. Thus far, we have compiled VS30 values for 2,997 sites in the United States, along with metadata for each measurement from government-sponsored reports, Web sites, and scientific and engineering journals. Most of the data in our VS30 compilation originated from publications directly reporting the work of field investigators. A small subset (less than 20 percent) of VS30 values was previously compiled by the USGS and other research institutions. Whenever possible, VS30 originating from these earlier compilations were crosschecked against published reports. Both downhole and surface-based VS30 estimates are represented in our VS30 compilation. Most of the VS30 data are for sites in the western contiguous United States (2,141 sites), whereas 786 VS30 values are for sites in the Central and Eastern United States; 70 values are for sites in other parts of the United States, including Alaska (15 sites), Hawaii (30 sites), and Puerto Rico (25 sites). An interactive map is hosted on the primary USGS Web site for accessing VS30 data (http://earthquake.usgs.gov/research/vs30/).

  5. Flexible IDL Compilation for Complex Communication Patterns

    Directory of Open Access Journals (Sweden)

    Eric Eide

    1999-01-01

    Full Text Available Distributed applications are complex by nature, so it is essential that there be effective software development tools to aid in the construction of these programs. Commonplace “middleware” tools, however, often impose a tradeoff between programmer productivity and application performance. For instance, many CORBA IDL compilers generate code that is too slow for high‐performance systems. More importantly, these compilers provide inadequate support for sophisticated patterns of communication. We believe that these problems can be overcome, thus making IDL compilers and similar middleware tools useful for a broader range of systems. To this end we have implemented Flick, a flexible and optimizing IDL compiler, and are using it to produce specialized high‐performance code for complex distributed applications. Flick can produce specially “decomposed” stubs that encapsulate different aspects of communication in separate functions, thus providing application programmers with fine‐grain control over all messages. The design of our decomposed stubs was inspired by the requirements of a particular distributed application called Khazana, and in this paper we describe our experience to date in refitting Khazana with Flick‐generated stubs. We believe that the special IDL compilation techniques developed for Khazana will be useful in other applications with similar communication requirements.
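
    The decomposed-stub idea can be pictured with a small sketch in which the marshal, send, receive and unmarshal phases are exposed as separate functions that the application may interleave with its own logic; the function names and the JSON wire format below are invented for illustration and are not Flick-generated code.

        import json

        # Illustrative "decomposed" RPC stub: each phase is a separate function, so a
        # caller can, e.g., marshal several requests up front or overlap sends with
        # other work.  `sock` is any connected socket-like object.

        def marshal_add(a, b):
            return json.dumps({"op": "add", "args": [a, b]}).encode()

        def send_request(sock, payload):
            sock.sendall(len(payload).to_bytes(4, "big") + payload)

        def recv_reply(sock):
            size = int.from_bytes(sock.recv(4), "big")
            return sock.recv(size)

        def unmarshal_reply(payload):
            return json.loads(payload.decode())["result"]

        def add_monolithic(sock, a, b):
            """Conventional stub: the four phases hidden behind one call."""
            send_request(sock, marshal_add(a, b))
            return unmarshal_reply(recv_reply(sock))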

  6. A pointer logic and certifying compiler

    Institute of Scientific and Technical Information of China (English)

    CHEN Yiyun; GE Lin; HUA Baojian; LI Zhaopeng; LIU Cheng; WANG Zhifang

    2007-01-01

    Proof-Carrying Code brings two big challenges to the research field of programming languages. One is to seek more expressive logics or type systems to specify or reason about the properties of low-level or high-level programs. The other is to study the technology of certifying compilation, in which the compiler generates proofs for programs with annotations. This paper presents our progress in the above two aspects. A pointer logic was designed for PointerC (a C-like programming language) in our research. As an extension of Hoare logic, our pointer logic expresses the change of pointer information for each statement in its inference rules to support program verification. Meanwhile, based on the ideas from CAP (Certified Assembly Programming) and SCAP (Stack-based Certified Assembly Programming), a reasoning framework was built to verify the properties of object code in a Hoare style. And a certifying compiler prototype for PointerC was implemented based on this framework. The main contribution of this paper is the design of the pointer logic and the implementation of the certifying compiler prototype. In our certifying compiler, the source language contains rich pointer types and operations and also supports dynamic storage allocation and deallocation.

  7. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities...... for these codes in a static, off-line compiler, we developed an interactive compilation feedback system that guides the programmer in iteratively modifying application source, thereby improving the compiler’s ability to generate loop-parallel code. We use this compilation system to modify two sequential...... benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...
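
    As a rough illustration of the kind of refactoring such feedback suggests, the sketch below rewrites a loop whose accumulation into a shared variable obstructs automatic parallelization into an independent map followed by a reduction; it is a generic Python example, not output of the compilation feedback system described above.

        from multiprocessing import Pool

        # Original loop: accumulation into `total` is a loop-carried dependence that a
        # static parallelizer may refuse to touch.
        def sum_of_squares_serial(xs):
            total = 0
            for x in xs:
                total += x * x
            return total

        # Refactored version: the per-element work is exposed as an independent map,
        # and the dependence is confined to a final reduction the runtime handles.
        def square(x):
            return x * x

        def sum_of_squares_parallel(xs, workers=4):
            with Pool(workers) as pool:
                return sum(pool.map(square, xs))

        if __name__ == "__main__":
            data = list(range(1000))
            assert sum_of_squares_serial(data) == sum_of_squares_parallel(data)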

  8. A small evaluation suite for Ada compilers

    Science.gov (United States)

    Wilke, Randy; Roy, Daniel M.

    1986-01-01

    After completing a small Ada pilot project (OCC simulator) for the Multi Satellite Operations Control Center (MSOCC) at Goddard last year, the use of Ada to develop OCCs was recommended. To help MSOCC transition toward Ada, a suite of about 100 evaluation programs was developed which can be used to assess Ada compilers. These programs compare the overall quality of the compilation system, compare the relative efficiencies of the compilers and the environments in which they work, and compare the size and execution speed of generated machine code. Another goal of the benchmark software was to provide MSOCC system developers with rough timing estimates for the purpose of predicting performance of future systems written in Ada.

  9. Proof-Carrying Code with Correct Compilers

    Science.gov (United States)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  10. Extension of Alvis compiler front-end

    Energy Technology Data Exchange (ETDEWEB)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr, E-mail: mwypych@agh.edu.pl, E-mail: mszpyrka@agh.edu.pl, E-mail: ptm@agh.edu.pl [AGH University of Science and Technology, Department of Applied Computer Science, Al. Mickiewicza 30, 30-059 Krakow (Poland)

    2015-12-31

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameter types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  11. Code Generation in the Columbia Esterel Compiler

    Directory of Open Access Journals (Sweden)

    Jia Zeng

    2007-02-01

    Full Text Available The synchronous language Esterel provides deterministic concurrency by adopting a semantics in which threads march in step with a global clock and communicate in a very disciplined way. Its expressive power comes at a cost, however: it is a difficult language to compile into machine code for standard von Neumann processors. The open-source Columbia Esterel Compiler is a research vehicle for experimenting with new code generation techniques for the language. Providing a front-end and a fairly generic concurrent intermediate representation, a variety of back-ends have been developed. We present three of the most mature ones, which are based on program dependence graphs, dynamic lists, and a virtual machine. After describing the very different algorithms used in each of these techniques, we present experimental results that compare twenty-four benchmarks generated by eight different compilation techniques running on seven different processors.
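
    The discipline of threads marching in step with a global clock can be conveyed with a toy scheduler in which every thread's step function runs exactly once per tick against the same snapshot of input signals; this is only an illustration of the synchronous model in Python, not code produced by the Columbia Esterel Compiler.

        # Toy synchronous scheduler: all threads observe the same input signals for a
        # tick, and their emissions become visible only in that tick's output.
        # (Illustration of the synchronous model only; not CEC-generated code.)

        def tick(threads, inputs):
            emitted = set()
            for step in threads:                 # every thread runs exactly once per tick
                emitted |= step(inputs)
            return emitted

        def watcher(inputs):
            return {"ALARM"} if "BUTTON" in inputs else set()

        def counter_factory():
            state = {"n": 0}
            def step(inputs):
                if "TICK" in inputs:
                    state["n"] += 1
                return {"DONE"} if state["n"] >= 3 else set()
            return step

        if __name__ == "__main__":
            threads = [watcher, counter_factory()]
            for i in range(4):
                print(i, tick(threads, {"TICK"} | ({"BUTTON"} if i == 2 else set())))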

  12. New Approach to Develop a Bilingual Compiler

    Directory of Open Access Journals (Sweden)

    Shampa Banik

    2014-02-01

    Full Text Available This research work presents the development of a Bangla programming language along with its compiler, with the aim of introducing programming to beginners through their mother tongue. The syntax and constructs of the language have been kept similar to BASIC, since BASIC's simple syntax makes it reasonably suitable as an introductory language for new programmers. A compiler has been developed for the proposed programming language that compiles the source code into an optimized intermediate code. We have developed our system in Java. Our software is an efficient translation engine which can translate English source code to Bangla source code. We have tested the system with a large number of test cases to identify which aspects of the system best explain its relative performance.

  13. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark

    2006-01-01

    We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...... by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose PRIMULA--generated propositional instances have thousands of variables, and whose jointrees have clusters...

  14. COMPILATION OF CURRENT HIGH ENERGY PHYSICS EXPERIMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.; Horne, C.P.; Hutchinson, M.S.; Rittenberg, A.; Trippe, T.G.; Yost, G.P.; Addis, L.; Ward, C.E.W.; Baggett, N.; Goldschmidt-Clermong, Y.; Joos, P.; Gelfand, N.; Oyanagi, Y.; Grudtsin, S.N.; Ryabov, Yu.G.

    1981-05-01

    This is the fourth edition of our compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about April 1981, and (2) had not completed taking of data by 1 January 1977. We emphasize that only approved experiments are included.

  15. A 3-month at-home tube feeding in 118 bulimia nervosa patients: a one-year prospective survey in adult patients.

    Science.gov (United States)

    Daniel, Rigaud; Didier, Perrin; Hélène, Pennacchio

    2014-04-01

    To study the 1-yr follow-up of 118 bulimia nervosa (BN) patients after a 3-month at-home tube feeding (TF) in a prospective study. At-home TF lasted 3 months, including one month of exclusive TF (no food). All patients completed 4 questionnaires (score of binge/purging episodes (BP), eating disorder inventory, anxiety, depression) before TF, at the 3-month TF point, and 6 and 12 months later. The score of BP episodes dramatically decreased from 28.8 ± 15 (before TF) to 7.3 ± 5.4 at 3 months, as well as at 1 yr (15.1 ± 6.2). We also obtained a 50% decrease in the Beck score (depression) and the Hamilton score (anxiety). Curiously, there was no difference between the BP scores of the patients who followed psychotherapy and those who did not, despite lower scores for anxiety and depression. In conclusion, in bulimia nervosa patients with normal BMI and purging behavior, home TF achieved total withdrawal from bingeing/purging in at least 75% of the cases in the short term (3 months) and in 25% of the patients at one year, whether or not the patients had psychotherapy. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  16. How to compile a curriculum vitae.

    Science.gov (United States)

    Fish, J

    The previous article in this series tackled the best way to apply for a job. Increasingly, employers request a curriculum vitae as part of the application process. This article aims to assist you in compiling a c.v. by discussing its essential components and content.

  17. Heat Transfer and Thermodynamics: a Compilation

    Science.gov (United States)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Studies include theories and mechanical considerations in the transfer of heat and the thermodynamic properties of matter and the causes and effects of certain interactions.

  18. Medical History: Compiling Your Medical Family Tree

    Science.gov (United States)

    ... history. Or, you can compile your family's health history on your computer or in a paper file. If you encounter reluctance from your family, consider these strategies: Share your ... have a family history of certain diseases or health conditions. Offer to ...

  19. Communications techniques and equipment: A compilation

    Science.gov (United States)

    1975-01-01

    This Compilation is devoted to equipment and techniques in the field of communications. It contains three sections. One section is on telemetry, including articles on radar and antennas. The second section describes techniques and equipment for coding and handling data. The third and final section includes descriptions of amplifiers, receivers, and other communications subsystems.

  20. SURVEY

    DEFF Research Database (Denmark)

    SURVEY is a widely used method within, among other fields, social science, the humanities, psychology and health research. Outside the research world there are also many organisations, such as consultancy firms and public institutions as well as marketing departments in private companies, that work...... with surveys. This book goes through all the phases of survey work and gives a practical introduction to: • designing the study and selecting samples, • formulating questionnaires and collecting and coding data, • methods for analysing the results

  1. Compilation of tRNA sequences.

    Science.gov (United States)

    Sprinzl, M; Grueter, F; Spelzhaus, A; Gauss, D H

    1980-01-11

    This compilation presents in a small space the tRNA sequences so far published. The numbering of tRNAPhe from yeast is used following the rules proposed by the participants of the Cold Spring Harbor Meeting on tRNA 1978 (1,2; Fig. 1). This numbering allows comparisons with the three-dimensional structure of tRNAPhe. The secondary structure of tRNAs is indicated by specific underlining. In the primary structure a nucleoside followed by a nucleoside in brackets, or a modification in brackets, denotes that both types of nucleosides can occupy this position. Part of a sequence in brackets designates a piece of sequence not unambiguously analyzed. Rare nucleosides are named according to the IUPAC-IUB rules (for complicated rare nucleosides and their identification see Table 1); those with lengthy names are given with the prefix x and specified in the footnotes. Footnotes are numbered according to the coordinates of the corresponding nucleoside and are indicated in the sequence by an asterisk. The references are restricted to the citation of the latest publication in those cases where several papers deal with one sequence. For additional information the reader is referred either to the original literature or to other tRNA sequence compilations (3-7). Mutant tRNAs are dealt with in a compilation by J. Celis (8). The compilers would welcome any information from readers regarding missing material or erroneous presentation. On the basis of this numbering system, computer-printed compilations of tRNA sequences in a linear form and in cloverleaf form are in preparation.

  2. HAL/S-FC compiler system functional specification

    Science.gov (United States)

    1974-01-01

    Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package and the restrictions and dependencies of the HAL/S-FC system are also considered.

  3. Prevalence of 12-Month Alcohol Use, High-Risk Drinking, and DSM-IV Alcohol Use Disorder in the United States, 2001-2002 to 2012-2013: Results From the National Epidemiologic Survey on Alcohol and Related Conditions.

    Science.gov (United States)

    Grant, Bridget F; Chou, S Patricia; Saha, Tulshi D; Pickering, Roger P; Kerridge, Bradley T; Ruan, W June; Huang, Boji; Jung, Jeesun; Zhang, Haitao; Fan, Amy; Hasin, Deborah S

    2017-09-01

    Lack of current and comprehensive trend data derived from a uniform, reliable, and valid source on alcohol use, high-risk drinking, and DSM-IV alcohol use disorder (AUD) represents a major gap in public health information. To present nationally representative data on changes in the prevalences of 12-month alcohol use, 12-month high-risk drinking, 12-month DSM-IV AUD, 12-month DSM-IV AUD among 12-month alcohol users, and 12-month DSM-IV AUD among 12-month high-risk drinkers between 2001-2002 and 2012-2013. The study data were derived from face-to-face interviews conducted in 2 nationally representative surveys of US adults: the National Epidemiologic Survey on Alcohol and Related Conditions, with data collected from April 2001 to June 2002, and the National Epidemiologic Survey on Alcohol and Related Conditions III, with data collected from April 2012 to June 2013. Data were analyzed in November and December 2016. Twelve-month alcohol use, high-risk drinking, and DSM-IV AUD. The study sample included 43 093 participants in the National Epidemiologic Survey on Alcohol and Related Conditions and 36 309 participants in the National Epidemiologic Survey on Alcohol and Related Conditions III. Between 2001-2002 and 2012-2013, 12-month alcohol use, high-risk drinking, and DSM-IV AUD increased by 11.2%, 29.9%, and 49.4%, respectively, with alcohol use increasing from 65.4% (95% CI, 64.3%-66.6%) to 72.7% (95% CI, 71.4%-73.9%), high-risk drinking increasing from 9.7% (95% CI, 9.3%-10.2%) to 12.6% (95% CI, 12.0%-13.2%), and DSM-IV AUD increasing from 8.5% (95% CI, 8.0%-8.9%) to 12.7% (95% CI, 12.1%-13.3%). With few exceptions, increases in alcohol use, high-risk drinking, and DSM-IV AUD between 2001-2002 and 2012-2013 were also statistically significant across sociodemographic subgroups. Increases in all of these outcomes were greatest among women, older adults, racial/ethnic minorities, and individuals with lower educational level and family income. Increases were also

  4. Testing-Based Compiler Validation for Synchronous Languages

    Science.gov (United States)

    Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier

    2014-01-01

    In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
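
    The general shape of such a validation harness, generate inputs, run the source-level semantics and the compiled artifact, and compare observable outputs, is sketched below; the reference interpreter and the command-line interface of the compiled binary are placeholders, not the Lustre-to-C tool chain of the paper.

        import random
        import subprocess

        # Skeleton of a testing-based validation loop (assumed interfaces, not the
        # actual Lustre/C tool chain from the paper).

        def reference_step(x):
            """Placeholder for evaluating the source-level semantics on one input."""
            return x * 2 if x >= 0 else -x

        def compiled_step(binary, x):
            """Placeholder: run the compiled artifact and parse its printed result."""
            out = subprocess.run([binary, str(x)], capture_output=True, text=True)
            return int(out.stdout.strip())

        def validate(binary, trials=100, seed=0):
            rng = random.Random(seed)
            for _ in range(trials):
                x = rng.randint(-1000, 1000)
                expected = reference_step(x)
                actual = compiled_step(binary, x)
                if expected != actual:
                    return f"mismatch on input {x}: {expected} != {actual}"
            return "no mismatches found"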

  5. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    Energy Technology Data Exchange (ETDEWEB)

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
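
    The idea of generating executable solution code from equations entered in symbolic form can be illustrated with SymPy playing the role the abstract assigns to computer algebra; this stand-alone sketch, including the toy heat-balance equation, is an assumption for illustration and not SPARK's actual symbolic interface.

        import sympy as sp

        # Illustration of a symbolic interface: an equation entered symbolically is
        # solved for the requested variable and turned into executable solution code.
        # (Generic SymPy sketch, not SPARK's implementation.)

        def make_solver(equation, unknown, params):
            solution = sp.solve(equation, unknown)[0]        # closed-form solution
            return sp.lambdify(params, solution, "math")     # compile to a Python function

        if __name__ == "__main__":
            q, u, a, t1, t2 = sp.symbols("q u a t1 t2")
            # Steady-state heat balance: q = u * a * (t1 - t2); solve for t2.
            heat_balance = sp.Eq(q, u * a * (t1 - t2))
            t2_of = make_solver(heat_balance, t2, (q, u, a, t1))
            print(t2_of(500.0, 2.5, 10.0, 60.0))             # 60 - 500/(2.5*10) = 40.0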

  7. Process compilation methods for thin film devices

    Science.gov (United States)

    Zaman, Mohammed Hasanuz

    This doctoral thesis presents the development of a systematic method of automatic generation of fabrication processes (or process flows) for thin film devices starting from schematics of the device structures. This new top-down design methodology combines formal mathematical flow construction methods with a set of library-specific available resources to generate flows compatible with a particular laboratory. Because this methodology combines laboratory resource libraries with a logical description of thin film device structure and generates a set of sequential fabrication processing instructions, this procedure is referred to as process compilation, in analogy to the procedure used for compilation of computer programs. Basically, the method developed uses a partially ordered set (poset) representation of the final device structure which describes the order between its various components expressed in the form of a directed graph. Each of these components is essentially fabricated "one at a time" in a sequential fashion. If the directed graph is acyclic, the sequence in which these components are fabricated is determined from the poset linear extensions, and the component sequence is finally expanded into the corresponding process flow. This graph-theoretic process flow construction method is powerful enough to formally prove the existence and multiplicity of flows, thus creating a design space D suitable for optimization. The cardinality |D| for a device with N components can be large, with a worst case |D| ≤ (N-1)!, yielding in general a combinatorial explosion of solutions. The number of solutions is hence controlled through a-priori estimates of |D| and condensation (i.e., reduction) of the device component graph. The mathematical method has been implemented in a set of algorithms that are parts of the software tool MISTIC (Michigan Synthesis Tools for Integrated Circuits). MISTIC is a planar process compiler that generates
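
    The flow-construction step rests on enumerating the linear extensions of the component poset; for an acyclic component graph this can be done with a short recursive search, as sketched below. The five-component device graph in the example is invented and does not come from MISTIC.

        # Enumerate linear extensions (valid fabrication orders) of an acyclic
        # component graph given as {component: set(of components it must follow)}.
        # Invented toy structure; not the MISTIC representation itself.

        def linear_extensions(requires):
            def extend(placed, remaining):
                if not remaining:
                    yield list(placed)
                    return
                for comp in sorted(remaining):
                    if requires[comp] <= set(placed):        # all prerequisites done
                        yield from extend(placed + [comp], remaining - {comp})
            yield from extend([], set(requires))

        if __name__ == "__main__":
            device = {
                "substrate": set(),
                "oxide":     {"substrate"},
                "metal1":    {"oxide"},
                "via":       {"metal1"},
                "passiv":    {"metal1"},
            }
            flows = list(linear_extensions(device))
            print(len(flows))      # 2: "via" and "passiv" may be fabricated in either order
            print(flows[0])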

  8. Compiling CIL Rewriting Language for Multiprocessors

    Institute of Scientific and Technical Information of China (English)

    田新民; 王鼎兴; et al.

    1994-01-01

    The high-level Compiler Intermediate Language CIL is a general-purpose description language for a parallel graph rewriting computational model, intended for parallel implementation of declarative languages on multiprocessor systems. In this paper, we first outline a new Hybrid Execution Model (HEM) and the corresponding parallel abstract machine PAM/TGR, based on the extended parallel Graph Rewriting Computational Model EGRCM, for implementing the CIL language on distributed memory multiprocessor systems. Then we focus on compiling the CIL language with various optimizing techniques such as pattern matching, rule indexing, node ordering and compile-time partial scheduling. The experimental results on a 16-node transputer array demonstrate the effectiveness of our model and strategies.

  9. Specialized Silicon Compilers for Language Recognition.

    Science.gov (United States)

    1984-07-01

    the circuits produced by a compiler can be verified by formal methods. Each primitive cell can be checked independently of the others. When all... primitive cell, each non-terminal corresponds to a more complex combination of cells, and each production corresponds to a construction rule. A... terminal symbol is reached during the parse, the corresponding primitive cell is added to the circuit. The following grammar for regular expressions is

  10. 1991 OCRWM bulletin compilation and index

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-05-01

    The OCRWM Bulletin is published by the Department of Energy, Office of Civilian Radioactive Waste Management, to provide current information about the national program for managing spent fuel and high-level radioactive waste. The document is a compilation of issues from the 1991 calendar year. A table of contents and an index have been provided to reference information contained in this year's Bulletins.

  11. Integrating Parallelizing Compilation Technologies for SMP Clusters

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Feng; Li Chen; Yi-Ran Wang; Xiao-Mi An; Lin Ma; Chun-Lei Sang; Zhao-Qing Zhang

    2005-01-01

    In this paper, a source-to-source parallelizing compiler system, AutoPar, is presented. The system transforms FORTRAN programs to multi-level hybrid MPI/OpenMP parallel programs. Integrated parallel optimizing technologies are utilized extensively to derive an effective program decomposition in the whole program scope. Other features such as synchronization optimization and communication optimization improve the performance scalability of the generated parallel programs, both intra-node and inter-node. The system makes a great effort to boost automation of parallelization. Profiling feedback is used in performance estimation, which is the basis of automatic program decomposition. Performance results for eight benchmarks in NPB1.0 from NAS on an SMP cluster are given, and the speedup is desirable. It is noticeable that in the experiment, at most one data distribution directive and a reduction directive are inserted by the user in BT/SP/LU. The compiler is based on ORC, the Open Research Compiler. ORC is a powerful compiler infrastructure, with such features as robustness, flexibility and efficiency. The strong analysis capability and well-defined infrastructure of ORC make the system implementation quite fast.

  12. Does Reading to Infants Benefit Their Cognitive Development at 9-Months-Old? An Investigation Using a Large Birth Cohort Survey

    Science.gov (United States)

    Murray, Aisling; Egan, Suzanne M.

    2014-01-01

    This study uses a nationally representative sample of 9-month-old infants and their families from the Growing Up in Ireland (GUI) study to investigate if reading to infants is associated with higher scores on contemporaneous indicators of cognitive development independently of other language-based interactions between parent and infant, such as…

  14. [Technical Representative - Monthly PMC Issues Reports : Rocky Mountain Arsenal NWR

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This collection of documents contains the Monthly PMC Issues reports for the Rocky Mountain Arsenal compiled by the Technical Representative. Reports in this...

  15. Training paediatric healthcare staff in recognising, understanding and managing conflict with patients and families: findings from a survey on immediate and 6-month impact.

    Science.gov (United States)

    Forbat, Liz; Simons, Jean; Sayer, Charlotte; Davies, Megan; Barclay, Sarah

    2017-03-01

    Conflict is a recognised component of healthcare. Disagreements about treatment protocols, treatment aims and poor communication are recognised warning signs. Conflict management strategies can be used to prevent escalation, but are not a routine component of clinical training. To report the findings from a novel training intervention, aimed at enabling paediatric staff to identify and understand the warning signs of conflict, and to implement conflict resolution strategies. Self-report measures were taken at baseline, immediately after the training and at 6 months. Questionnaires recorded quantitative and qualitative feedback on the experience of training, and the ability to recognise and de-escalate conflict. The training was provided in a tertiary teaching paediatric hospital in England over 18 months, commencing in June 2013. A 4-h training course on identifying, understanding and managing conflict was provided to staff. Baseline data were collected from all 711 staff trained, and 6-month follow-up data were collected for 313 of those staff (44%). The training was successful in equipping staff to recognise and de-escalate conflict. Six months after the training, 57% of respondents had experienced conflict, of whom 91% reported that the training had enabled them to de-escalate the conflict. Learning was retained at 6 months, with staff more able than at baseline to recognise conflict triggers (Fisher's exact test, p=0.001) and to manage conflict situations (Pearson's χ2 test, p=0.001). This training has the potential to reduce substantially the human and economic costs of conflicts for healthcare providers, healthcare staff, patients and relatives. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  16. A Performance Tuning Methodology with Compiler Support

    Directory of Open Access Journals (Sweden)

    Oscar Hernandez

    2008-01-01

    Full Text Available We have developed an environment, based upon robust, existing, open source software, for tuning applications written using MPI, OpenMP or both. The goal of this effort, which integrates the OpenUH compiler and several popular performance tools, is to increase user productivity by providing an automated, scalable performance measurement and optimization system. In this paper we describe our environment, show how these complementary tools can work together, and illustrate the synergies possible by exploiting their individual strengths and combined interactions. We also present a methodology for performance tuning that is enabled by this environment. One of the benefits of using compiler technology in this context is that it can direct the performance measurements to capture events at different levels of granularity and help assess their importance, which we have shown to significantly reduce the measurement overheads. The compiler can also help when attempting to understand the performance results: it can supply information on how a code was translated and whether optimizations were applied. Our methodology combines two performance views of the application to find bottlenecks. The first is a high level view that focuses on OpenMP/MPI performance problems such as synchronization cost and load imbalances; the second is a low level view that focuses on hardware counter analysis with derived metrics that assess the efficiency of the code. Our experiments have shown that our approach can significantly reduce overheads for both profiling and tracing to acceptable levels and limit the number of times the application needs to be run with selected hardware counters. In this paper, we demonstrate the workings of this methodology by illustrating its use with selected NAS Parallel Benchmarks and a cloud resolving code.

  17. Piping and tubing technology: A compilation

    Science.gov (United States)

    1971-01-01

    A compilation on the devices, techniques, and methods used in piping and tubing technology is presented. Data cover the following: (1) a number of fittings, couplings, and connectors that are useful in joining tubing and piping and various systems, (2) a family of devices used where flexibility and/or vibration damping are necessary, (3) a number of devices found useful in the regulation and control of fluid flow, and (4) shop hints to aid in maintenance and repair procedures such as cleaning, flaring, and swaging of tubes.

  18. HAL/S-360 compiler system specification

    Science.gov (United States)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  19. Compiling the First Monolingual Lusoga Dictionary

    Directory of Open Access Journals (Sweden)

    Minah Nabirye

    2011-10-01

    Full Text Available

    Abstract: In this research article a study is made of the approach followed to compile the first-ever monolingual dictionary for Lusoga. Lusoga is a Bantu language spoken in Uganda by slightly over two million people. Being an under-resourced language, the Lusoga orthography had to be designed, a grammar written, and a corpus built, before embarking on the compilation of the dictionary. This compilation was aimed at attaining an academic degree, hence requiring a rigorous research methodology. Firstly, the prevailing methods for compiling dictionaries were mainly practical and insufficient in explaining the theoretical linguistic basis for dictionary compilation. Since dictionaries are based on meaning, the theory of meaning was used to account for all linguistic data considered in dictionaries. However, meaning is considered at a very abstract level, far removed from the process of compiling dictionaries. Another theory, the theory of modularity, was used to bridge the gap between the theory of meaning and the compilation process. The modular theory explains how the different modules of a language contribute information to the different parts of the dictionary article or dictionary information in general. Secondly, the research also had to contend with the different approaches for analysing Bantu languages for Bantu and European audiences. A description of the Bantu- and European-centred approaches to Bantu studies was undertaken in respect of (a) the classification of Lusoga words, and (b) the specification of their citations. As a result, Lusoga lexicography deviates from the prevailing Bantu classification and citation of nouns, adjectives and verbs in particular. The dictionary was tested on two separate occasions and all the feedback was considered in the compilation process. This article, then, gives an overall summary of all the steps involved in the compilation of the Eiwanika ly'Olusoga, i.e. the Monolingual Lusoga Dictionary

  20. Compiling ER Specifications into Declarative Programs

    CERN Document Server

    Braßel, Bernd; Muller, Marion

    2007-01-01

    This paper proposes an environment to support high-level database programming in a declarative programming language. In order to ensure safe database updates, all access and update operations related to the database are generated from high-level descriptions in the entity-relationship (ER) model. We propose a representation of ER diagrams in the declarative language Curry so that they can be constructed by various tools and then translated into this representation. Furthermore, we have implemented a compiler from this representation into a Curry program that provides access and update operations based on a high-level API for database programming.
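
    The generated access and update operations described here can be pictured with a small code-generation sketch: an ER-style entity description is turned into insert and lookup helpers over a database connection. The representation, the emitted function names and the use of SQLite are assumptions for illustration; the paper itself targets Curry, not Python.

        import sqlite3

        # Toy generator: an ER-style entity description is compiled into safe insert
        # and lookup operations.  (Illustrative only; the paper targets Curry, not Python.)

        def make_entity_api(conn, name, fields):
            cols = ", ".join(fields)
            conn.execute(f"CREATE TABLE IF NOT EXISTS {name} (id INTEGER PRIMARY KEY, {cols})")

            def insert(**values):
                assert set(values) == set(fields), "update restricted to declared attributes"
                placeholders = ", ".join("?" for _ in fields)
                cur = conn.execute(
                    f"INSERT INTO {name} ({cols}) VALUES ({placeholders})",
                    [values[f] for f in fields],
                )
                return cur.lastrowid

            def get(entity_id):
                return conn.execute(f"SELECT * FROM {name} WHERE id = ?", (entity_id,)).fetchone()

            return insert, get

        if __name__ == "__main__":
            conn = sqlite3.connect(":memory:")
            add_student, get_student = make_entity_api(conn, "Student", ["name", "email"])
            sid = add_student(name="Ada", email="ada@example.org")
            print(get_student(sid))    # (1, 'Ada', 'ada@example.org')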

  1. Real‑life cost and cost‑effectiveness for tiotropium 18 μg od monotherapy in moderate and severe COPD patients: a 48‑month survey

    Directory of Open Access Journals (Sweden)

    Massimiliano Povero

    2014-06-01

    Full Text Available BACKGROUND: Tiotropium monotherapy enables a significant reduction of morbidity in COPD. OBJECTIVE: to evaluate and compare the cost and cost‑effectiveness of tiotropium monotherapy administered for 24 months (18 μg od) in mild‑to‑moderate and severe chronic obstructive pulmonary disease (COPD). METHODS: Clinical outcomes (days in hospital; visits in general ward; cycles of systemic steroids; cycles of antibiotics; and maintenance therapy drugs) were evaluated, from the Italian NHS perspective, in two groups of patients corresponding to predicted FEV1 baseline values ≤ 50% (A) and > 50% (B). In order to perform the cost‑effectiveness analysis, the FEV1 value available for each patient was converted into an SGRQ score using a published multivariate linear model. Utilities were then obtained through the Ståhl equation. RESULTS: The comparison between 24 months of standard therapy and the subsequent 24‑month period of tiotropium monotherapy showed that hospitalization cost, which represents the driving treatment cost, drops from 77% to 69% (A) and from 67% to 33% (B) of the total cost. Conversely, maintenance therapy cost increased, but the amount was more than offset by the savings accruing from the shortening of hospitalization. Furthermore, the cost‑effectiveness results revealed mean savings of about 216 € (A) and 961 € (B), as well as a mean gain of 0.07 QALY (A) and 0.02 QALY (B). Dominance of tiotropium (calculated only within patients completing the treatment course) revealed that in almost 29% (A) and 36% (B) of subjects the tiotropium strategy is dominant, while only in 2% (A) and 7% (B) of cases is it associated with cost increases and worsening of quality of life. The dominance was systematic in severe COPD. Statistical analyses confirm this trend. CONCLUSIONS: The results of the present study suggest that tiotropium used as the sole treatment in COPD systematically yields significant cost savings together with positive effects on the evaluated quality of life. These effects prove

  2. Compiling a Corpus for Teaching Medical Translation

    Directory of Open Access Journals (Sweden)

    Elizabeth de la Teja Bosch

    2014-04-01

    Full Text Available Background: medical translation has countless documentary sources; the major difficulty lies in knowing how to assess them. The corpus is the ideal tool to perform this activity in a rapid and reliable way, and to define learning objectives based on text typology and oriented towards professional practice. Objective: to compile an electronic corpus that meets the requirements of professional practice in specialized medical translation. Methods: pedagogical research was conducted in the province of Cienfuegos. The units of analysis involved records from translators of the Provincial Medical Sciences Information Center and specialized translators in this field, who completed a questionnaire to accurately determine their information needs, conditioning the corpus design criteria. The analysis of a set of texts extracted from highly reputable sources led to the text selection and final compilation. Subsequently, the validation of the corpus as a documentary tool for teaching specialized medical translation was performed. Results: there was a concentration of translation assignments in the topics: malignant tumors, hypertension, heart disease, diabetes mellitus and pneumonias. The predominant text typologies were: evaluative and dissemination of current research, with plenty of original articles and reviews. The text corpus design criteria were: unannotated, documented, specialized, monitor and comparable. Conclusions: the corpus is a useful tool to show the lexical, terminological, semantic, discursive and contextual particularities of biomedical communication. It makes it possible to define learning objectives and identify translation problems. Key words: teaching; translating; medicine

  3. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    Full Text Available There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL). Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  4. TUNE: Compiler-Directed Automatic Performance Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Mary [University of Utah

    2014-09-18

    This project has developed compiler-directed performance tuning technology targeting the Cray XT4 Jaguar system at Oak Ridge, which has multi-core Opteron nodes with SSE-3 SIMD extensions, and the Cray XE6 Hopper system at NERSC. To achieve this goal, we combined compiler technology for model-guided empirical optimization for memory hierarchies with SIMD code generation, which have been developed by the PIs over the past several years. We examined DOE Office of Science applications to identify performance bottlenecks and apply our system to computational kernels that operate on dense arrays. Our goal for this performance-tuning technology has been to yield hand-tuned levels of performance on DOE Office of Science computational kernels, while allowing application programmers to specify their computations at a high level without requiring manual optimization. Overall, we aim to make our technology for SIMD code generation and memory hierarchy optimization a crucial component of high-productivity Petaflops computing through a close collaboration with the scientists in national laboratories.

  5. Compilation of functional languages using flow graph analysis

    NARCIS (Netherlands)

    Hartel, Pieter H.; Glaser, Hugh; Wild, John M.

    1994-01-01

    A system based on the notion of a flow graph is used to specify formally and to implement a compiler for a lazy functional language. The compiler takes a simple functional language as input and generates C. The generated C program can then be compiled, and loaded with an extensive run-time system to

  7. Compton Thick AGN in the 70 Month Swift-BAT All-Sky Hard X-ray Survey: a Bayesian approach

    CERN Document Server

    Akylas, A; Ranalli, P; Gkiokas, E; Corral, A; Lanzuisi, G

    2016-01-01

    The 70-month Swift/BAT catalogue provides a sensitive view of the extragalactic X-ray sky at hard energies (>10 keV) containing about 800 Active Galactic Nuclei. We explore its content in heavily obscured, Compton-thick AGN by combining the BAT (14-195 keV) with the lower energy XRT (0.3-10 keV) data. We apply a Bayesian methodology using Markov chains to estimate the exact probability distribution of the column density for each source. We find 54 possible Compton-thick sources (with probability 3 to 100%) translating to a ~7% fraction of the AGN in our sample. We derive the first parametric luminosity function of Compton-thick AGN. The unabsorbed luminosity function can be represented by a double power-law with a break at $L_{\star} \sim 2 \times 10^{42}\ \rm erg\ s^{-1}$ in the 20-40 keV band.

  8. Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers

    Science.gov (United States)

    Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj

    1995-01-01

    The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and variable number of processors, provided that arrays were single or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.

  9. Twelve-months prevalence of mental disorders in the German Health Interview and Examination Survey for Adults - Mental Health Module (DEGS1-MH): a methodological addendum and correction.

    Science.gov (United States)

    Jacobi, Frank; Höfler, Michael; Strehle, Jens; Mack, Simon; Gerschler, Anja; Scholl, Lucie; Busch, Markus A; Hapke, Ulfert; Maske, Ulrike; Seiffert, Ingeburg; Gaebel, Wolfgang; Maier, Wolfgang; Wagner, Michael; Zielasek, Jürgen; Wittchen, Hans-Ulrich

    2015-12-01

    We recently published findings in this journal on the prevalence of mental disorders from the German Health Interview and Examination Survey for Adults Mental Health Module (DEGS1-MH). The DEGS1-MH paper was also meant to be the major reference publication for this large-scale German study program, allowing future users of the data set to understand how the study was conducted and analyzed. Thus, towards this goal highest standards regarding transparency, consistency and reproducibility should be applied. After publication, unfortunately, the need for an addendum and corrigendum became apparent due to changes in the eligible reference sample, and corresponding corrections of the imputed data. As a consequence the sample description, sample size and some prevalence data needed amendments. Additionally we identified a coding error in the algorithm for major depression that had a significant effect on the prevalence estimates of depression and associated conditions. This addendum and corrigendum highlights all changes and presents the corrected prevalence tables. Copyright © 2015 John Wiley & Sons, Ltd.

  10. The initial management of stable angina in Europe, from the Euro Heart Survey: a description of pharmacological management and revascularization strategies initiated within the first month of presentation to a cardiologist in the Euro Heart Survey of Stable Angina.

    NARCIS (Netherlands)

    Daly, C.A.; Clemens, F.; Lopez-Sendon, J.; Tavazzi, L.; Boersma, E.; Danchin, N.; Delahaye, F.; Gitt, A.; Julian, D.; Mulcahy, D.; Ruzyllo, W.; Thygesen, K.; Verheugt, F.W.A.; Fox, K.M.

    2005-01-01

    AIMS: In order to assess adherence to guidelines and international variability in management, the Euro Heart Survey of Newly Presenting Angina prospectively studied medical therapy, percutaneous coronary intervention (PCI), and surgery in patients with new-onset stable angina in Europe. METHODS AND

  11. Electric power monthly

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sandra R.; Johnson, Melvin; McClevey, Kenneth; Calopedis, Stephen; Bolden, Deborah

    1992-05-01

    The Electric Power Monthly is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the national, Census division, and State levels for net generation, fuel consumption, fuel stocks, quantity and quality of fuel, cost of fuel, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fuel are also displayed for the North American Electric Reliability Council (NERC) regions. Additionally, statistics by company and plant are published in the EPM on capability of new plants, new generation, fuel consumption, fuel stocks, quantity and quality of fuel, and cost of fuel.

  12. Parallelizing More Loops with Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    2012-01-01

    The performance of many parallel applications relies not on instruction-level parallelism but on loop-level parallelism. Unfortunately, automatic parallelization of loops is a fragile process; many different obstacles affect or prevent it in practice. To address this predicament we developed an interactive compilation feedback system that guides programmers in iteratively modifying their application source code. This helps leverage the compiler's ability to generate loop-parallel code. We employ our system to modify two sequential benchmarks dealing with image processing and edge detection, resulting in scalable parallelized code that runs up to 8.3 times faster on an eight-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should be combined...

  13. Compilation of Existing Neutron Screen Technology

    Directory of Open Access Journals (Sweden)

    N. Chrysanthopoulou

    2014-01-01

    Full Text Available The presence of fast neutron spectra in new reactors is expected to induce a strong impact on the contained materials, including structural materials, nuclear fuels, neutron reflecting materials, and tritium breeding materials. Therefore, introduction of these reactors into operation will require extensive testing of their components, which must be performed under neutronic conditions representative of those expected to prevail inside the reactor cores when in operation. Due to limited availability of fast reactors, testing of future reactor materials will mostly take place in water cooled material test reactors (MTRs by tailoring the neutron spectrum via neutron screens. The latter rely on the utilization of materials capable of absorbing neutrons at specific energy. A large but fragmented experience is available on that topic. In this work a comprehensive compilation of the existing neutron screen technology is attempted, focusing on neutron screens developed in order to locally enhance the fast over thermal neutron flux ratio in a reactor core.

  14. Programming cells: towards an automated 'Genetic Compiler'.

    Science.gov (United States)

    Clancy, Kevin; Voigt, Christopher A

    2010-08-01

    One of the visions of synthetic biology is to be able to program cells using a language that is similar to that used to program computers or robotics. For large genetic programs, keeping track of the DNA on the level of nucleotides becomes tedious and error prone, requiring a new generation of computer-aided design (CAD) software. To push the size of projects, it is important to abstract the designer from the process of part selection and optimization. The vision is to specify genetic programs in a higher-level language, which a genetic compiler could automatically convert into a DNA sequence. Steps towards this goal include: defining the semantics of the higher-level language, algorithms to select and assemble parts, and biophysical methods to link DNA sequence to function. These will be coupled to graphic design interfaces and simulation packages to aid in the prediction of program dynamics, optimize genes, and scan projects for errors.

  15. abc: An extensible AspectJ compiler

    DEFF Research Database (Denmark)

    Avgustinov, Pavel; Christensen, Aske Simon; Hendren, Laurie

    2005-01-01

    Research in the design of aspect-oriented programming languages requires a workbench that facilitates easy experimentation with new language features and implementation techniques. In particular, new features for AspectJ have been proposed that require extensions in many dimensions: syntax, type checking and code generation, as well as data flow and control flow analyses. The AspectBench Compiler (abc) is an implementation of such a workbench. The base version of abc implements the full AspectJ language. Its frontend is built, using the Polyglot framework, as a modular extension of the Java language. The use of Polyglot gives flexibility of syntax and type checking. The backend is built using the Soot framework, to give modular code generation and analyses. In this paper, we outline the design of abc, focusing mostly on how the design supports extensibility. We then provide a general overview...

  16. Trends and determinants for early initiation of and exclusive breastfeeding under six months in Vietnam: results from the Multiple Indicator Cluster Surveys, 2000–2011

    Directory of Open Access Journals (Sweden)

    Quyen Thi-Tu Bui

    2016-02-01

    Full Text Available Background: There is strong evidence that breastfeeding (BF) significantly benefits mothers and infants in various ways. Yet the proportion of breastfed babies in Vietnam is low and continues to decline. This study fills an important evidence gap in BF practices in Vietnam. Objective: This paper examines the trend of early initiation of BF and exclusive BF from 2000 to 2011 in Vietnam and explores the determinants at individual and contextual levels. Design: Data from three waves of the Multiple Indicator Cluster Survey were combined to estimate crude and adjusted trends over time for two outcomes – early initiation of BF and exclusive BF. Three-level logistic regressions were fitted to examine the impacts of both individual and contextual characteristics on early initiation of BF and exclusive BF in the 2011 data. Results: Both types of BF showed a decreasing trend over time after controlling for individual-level characteristics, but this trend was more evident for early initiation of BF. Apart from child's age, individual-level characteristics were not significant predictors of the BF outcomes, but provincial characteristics had a strong association. When controlling for individual-level characteristics, mothers living in provinces with a higher percentage of mothers with more than three children were more likely to have initiated early BF (odds ratio [OR]: 1.06; confidence interval [CI]: 1.02–1.11) but less likely to exclusively breastfeed their babies (OR: 0.94; CI: 0.88–1.01). Mothers living in areas with a higher poverty rate were more likely to breastfeed exclusively (OR: 1.07; CI: 1.02–1.13), and those who delivered by Caesarean section were less likely to initiate early BF. Conclusions: Our results suggest that environmental factors are becoming more important for determining BF practices in Vietnam. Intervention programs should therefore not only consider individual factors, but should also consider the potential impact of

  17. Continuous measurements of SiF4 and SO2 by thermal emission spectroscopy: Insight from a 6-month survey at the Popocatépetl volcano

    Science.gov (United States)

    Taquet, N.; Meza Hernández, I.; Stremme, W.; Bezanilla, A.; Grutter, M.; Campion, R.; Palm, M.; Boulesteix, T.

    2017-07-01

    The processes linked with the emplacement and growth/destruction of a lava dome are of prime importance to understand the stability of such extrusions and assess the associated risks for local populations. During the last couple of decades, ground- and space-based spectroscopic techniques have been developed to monitor such processes from a safe distance. Such approaches significantly improved our knowledge about the relationship between the chemical composition of volcanic gas plumes and both the deep and shallow volcanic processes leading to the different types of explosive activity. The potential of ground-based thermal emission Fourier Transform Infrared spectroscopy (FTIR) has remained under-exploited due to the difficulty of properly handling the radiative-transfer phenomena. Despite the drawbacks in the complex analytical requirements, this method makes it possible to continuously monitor (day and night), with a high temporal resolution (1 measurement/3 min), relevant gas species such as SO2 and SiF4 in volcanic plumes. Previous studies have related the temporal variations of the SiF4/SO2 ratio in volcanic plumes to the onset of vulcanian explosions. This study reports a 6-month SO2, SiF4, and SiF4/SO2 time series (from January to June 2015) of Popocatépetl's gas plume obtained from FTIR thermal emission spectroscopic measurements. The infrared spectra were analyzed using the SFIT4 radiative transfer and inverse model, which we have adapted for this application. We obtained highly variable SiF4/SO2 ratios with a mean value of 3.6 × 10^-4, with the highest values (around 3 × 10^-3) measured during the final phase of a lava dome growth (February-March 2015). The rapid SiF4/SO2 variations were explored more carefully and compared for the first time with the seismic activity. A remarkable coincidence between sharp SiF4/SO2 rises and seismic events is evidenced here.

  18. Compton Thick AGN in the 70 Month Swift-BAT All-Sky Hard X-ray Survey: a Bayesian approach

    Science.gov (United States)

    Georgantopoulos, I.; Akylas, A.; Ranalli, P.; Corral, A.; Lanzuisi, G.

    2016-08-01

    The 70 month Swift/BAT catalogue provides a sensitive view of the extragalactic X-ray sky at hard energies (14-195 keV) containing about 800 Active Galactic Nuclei. We explore its content in heavily obscured Compton-thick AGN by combining the BAT (14-195 keV) with the XRT data (0.3-10 keV) at lower energies. We apply a Bayesian methodology using Markov chains to estimate the exact probability distribution of the column density. We find 54 possible Compton-thick sources (from 3 to 100% probability) translating to a 7% fraction of the total AGN population. We derive an accurate Compton-thick number count distribution taking into account the exact probability of a source being Compton-thick as well as the flux errors. The number density of Compton-thick AGN is critical for the calibration of X-ray background synthesis models. We find that the number count distribution agrees with models that adopt a low intrinsic fraction of Compton-thick AGN (15%) among the total AGN population and a reflected emission of ~5%. Finally, we derive the first parametric luminosity function of Compton-thick AGN in the local universe. The unabsorbed luminosity function can be represented by a double power-law with a break at L* ~ 2 x 10^42 erg s^-1 in the 20-40 keV band. The Compton-thick AGN constitute a substantial fraction of the AGN density at low luminosities (<10^42 erg/s).

  19. Compton-thick AGN in the 70-month Swift-BAT All-Sky Hard X-ray Survey: A Bayesian approach

    Science.gov (United States)

    Akylas, A.; Georgantopoulos, I.; Ranalli, P.; Gkiokas, E.; Corral, A.; Lanzuisi, G.

    2016-10-01

    The 70-month Swift-BAT catalogue provides a sensitive view of the extragalactic X-ray sky at hard energies (>10 keV) containing about 800 active galactic nuclei (AGN). We explore its content in heavily obscured, Compton-thick AGN by combining the BAT (14-195 keV) with the lower energy XRT (0.3-10 keV) data. We apply a Bayesian methodology using Markov chains to estimate the exact probability distribution of the column density for each source. We find 53 possible Compton-thick sources (probability range 3-100%) translating to a ~7% fraction of the AGN in our sample. We derive the first parametric luminosity function of Compton-thick AGN. The unabsorbed luminosity function can be represented by a double power law with a break at L⋆ ~ 2 × 10^42 erg s^-1 in the 20-40 keV band. The Compton-thick AGN contribute ~17% of the total AGN emissivity. We derive an accurate Compton-thick number count distribution taking into account the exact probability of a source being Compton-thick and the flux uncertainties. This number count distribution is critical for the calibration of the X-ray background synthesis models, i.e. for constraining the intrinsic fraction of Compton-thick AGN. We find that the number counts distribution in the 14-195 keV band agrees well with our models which adopt a low intrinsic fraction of Compton-thick AGN (~ 12%) among the total AGN population and a reflected emission of ~ 5%. In the extreme case of zero reflection, the number counts can be modelled with a fraction of at most 30% Compton-thick AGN of the total AGN population and no reflection. Moreover, we compare our X-ray background synthesis models with the number counts in the softer 2-10 keV band. This band is more sensitive to the reflected component and thus helps us to break the degeneracy between the fraction of Compton-thick AGN and the reflection emission. The number counts in the 2-10 keV band are well above the models which assume a 30% Compton-thick AGN fraction and zero reflection, while
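
    The break quoted above at L⋆ ~ 2 × 10^42 erg s^-1 refers to a double power-law luminosity function; the expression below is the parameterisation conventionally used for X-ray luminosity functions of this kind (the normalisation $A$ and the faint- and bright-end slopes $\gamma_1$, $\gamma_2$ are generic symbols, not values taken from the paper):

      $\dfrac{d\Phi}{d\log L_X} \;=\; \dfrac{A}{(L_X/L_{\star})^{\gamma_1} + (L_X/L_{\star})^{\gamma_2}}$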

  20. Alaska Geochemical Database Version 2.0 (AGDB2) - Including "Best Value" Data Compilations for Geochemical Data for Rock, Sediment, Soil, Mineral, and Concentrate Sample Media

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Alaska Geochemical Database Version 2.0 (AGDB2) contains new geochemical data compilations in which each geologic material sample has one "best value"...

  1. Continuation-Passing C, compiling threads to events through continuations

    CERN Document Server

    Kerneis, Gabriel

    2010-01-01

    In this paper, we introduce Continuation Passing C (CPC), a programming language for concurrent systems in which native and cooperative threads are unified and presented to the programmer as a single abstraction. The CPC compiler uses a compilation technique, based on the CPS transform, that yields efficient code and an extremely lightweight representation for contexts. We provide a complete proof of the correctness of our compilation scheme. We show in particular that lambda-lifting, a common compilation technique for functional languages, is also correct in an imperative language like C, under some conditions enforced by the CPC compiler. The current CPC compiler is mature enough to write substantial programs such as Hekate, a highly concurrent BitTorrent seeder. Our benchmark results show that CPC is as efficient as, while significantly cheaper than, the most efficient thread libraries available.
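
    The CPS transform mentioned above rewrites every function so that, instead of returning a value, it passes the value to an explicit continuation; lambda-lifting then turns the resulting nested functions into top-level ones. A minimal Python illustration of the direct-style/CPS correspondence (CPC itself performs this transformation on C code, so this is only a conceptual sketch):

      # Direct style: results are returned to the caller.
      def add(a, b):
          return a + b

      def square(x):
          return x * x

      def hypot_sq(a, b):
          return add(square(a), square(b))

      # Continuation-passing style: each function takes an extra argument k,
      # the continuation, and hands its result to k instead of returning it.
      def add_cps(a, b, k):
          return k(a + b)

      def square_cps(x, k):
          return k(x * x)

      def hypot_sq_cps(a, b, k):
          return square_cps(a, lambda a2:
                 square_cps(b, lambda b2:
                 add_cps(a2, b2, k)))

      print(hypot_sq(3, 4))                    # 25
      print(hypot_sq_cps(3, 4, lambda r: r))   # 25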

  2. Design and Implementation of Java Just-in-Time Compiler

    Institute of Scientific and Technical Information of China (English)

    丁宇新; 梅嘉; 程虎

    2000-01-01

    Early Java implementations relied on interpretation, leading to poor performance compared to compiled programs. A Java just-in-time (JIT) compiler can compile Java programs at runtime, so it not only improves Java's performance markedly but also preserves Java's portability. In this paper the design and implementation techniques of a Java JIT compiler based on the Chinese open system are discussed in detail. To enhance portability, a translation method that combines static simulation with macro expansion is adopted. The optimization technique for the JIT compiler is also discussed, and a way to evaluate the hotspots in Java programs is presented. Experiments have been conducted to verify that JIT compilation is an efficient way to accelerate Java.

  3. Geothermal water and gas: collected methods for sampling and analysis. Comment issue. [Compilation of methods

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, J.G.; Serne, R.J.; Shannon, D.W.; Woodruff, E.M.

    1976-08-01

    A collection of methods for sampling and analysis of geothermal fluids and gases is presented. Compilations of analytic options for constituents in water and gases are given. Also, a survey of published methods of laboratory water analysis is included. It is stated that no recommendation of the applicability of the methods to geothermal brines should be assumed since the intent of the table is to encourage and solicit comments and discussion leading to recommended analytical procedures for geothermal waters and research. (WHK)

  4. Computer-assisted methods for the construction, compilation and display of geoscientific maps

    Science.gov (United States)

    Gabert, Gottfried

    The paper reviews modern methods for map construction, compilation and display on the basis of current applications at the Geological Surveys of the Federal Republic of Germany and Lower Saxony. The graphical representation of geoscientific data, for example mapping and exploration results, is generally done in the traditional way of analog maps. Different possibilities to produce digital maps exist: map construction directly from geological field data, digitization of existing maps, especially manuscript maps, conversion of remotely sensed data into raster or vector maps.

  5. HOPE: Just-in-time Python compiler for astrophysical computations

    Science.gov (United States)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of a compiled implementation.
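
    As noted above, enabling the JIT only requires decorating a function. A sketch of the intended usage pattern, assuming the package is importable as hope and exposes a jit decorator as described by the authors (the names below rest on that assumption):

      import hope  # assumed package/decorator names, following the paper's description

      @hope.jit
      def add_scaled(a, b, s):
          # Plain numerical Python; HOPE translates it to C++ on the first call.
          return a + s * b

      print(add_scaled(1.0, 2.0, 0.5))  # subsequent calls run the compiled version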

  6. HAL/S-360 compiler test activity report

    Science.gov (United States)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  7. Performance of Compiler-Assisted Memory Safety Checking

    Science.gov (United States)

    2014-08-01

    Buffer overflows affect a large installed base of C code. This technical note (David Keaton and Robert C. Seacord, CMU/SEI Technical Note, August 2014) describes the criteria for deploying compiler-assisted memory safety checking and describes a modification to the LLVM compiler to enable hoisting bounds checks from loops and functions. This proof-of-concept prototype has been used

  8. Ground Operations Aerospace Language (GOAL). Volume 2: Compiler

    Science.gov (United States)

    1973-01-01

    The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

  9. Compiler Construction Using Java, JavaCC, and Yacc

    CERN Document Server

    Dos Reis, Anthony J

    2012-01-01

    Broad in scope, involving theory, the application of that theory, and programming technology, compiler construction is a moving target, with constant advances in compiler technology taking place. Today, a renewed focus on do-it-yourself programming makes a quality textbook on compilers, that both students and instructors will enjoy using, of even more vital importance. This book covers every topic essential to learning compilers from the ground up and is accompanied by a powerful and flexible software package for evaluating projects, as well as several tutorials, well-defined projects, and tes

  10. Automatic Parallelization An Overview of Fundamental Compiler Techniques

    CERN Document Server

    Midkiff, Samuel P

    2012-01-01

    Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and
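
    Dependence analysis of the kind described above decides whether two array references in different loop iterations can touch the same memory location. The classic GCD test is one such analysis; a simplified Python sketch follows (production parallelizers combine this with stronger tests and with use-def and pointer analysis):

      from math import gcd

      def gcd_test(a, b, c, d):
          """May A[a*i + b] and A[c*j + d] refer to the same element for some
          integer iterations i, j?  The Diophantine equation a*i - c*j = d - b
          has integer solutions iff gcd(a, c) divides d - b."""
          return (d - b) % gcd(a, c) == 0

      # A[2*i] written, A[2*i + 1] read: subscripts can never coincide.
      print(gcd_test(2, 0, 2, 1))   # False -> no dependence, loop can be parallelized
      # A[2*i] written, A[4*i + 6] read: overlap is possible.
      print(gcd_test(2, 0, 4, 6))   # True -> a dependence must be assumed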

  11. DisBlue+: A distributed annotation-based C# compiler

    Directory of Open Access Journals (Sweden)

    Samir E. AbdelRahman

    2010-06-01

    Full Text Available Many programming languages utilize annotations to add useful information to the program, but they still result in more tokens to be compiled and hence slower compilation time. Any current distributed compiler breaks the program into scattered disjoint pieces to speed up the compilation. However, these pieces cooperate synchronously and depend highly on each other. This causes massive overhead since messages, symbols, or code must be moved throughout the network. This paper presents two promising compilers named annotation-based C# (Blue+) and distributed annotation-based C# (DisBlue+). The proposed Blue+ annotation is based on axiomatic semantics to replace the if/loop constructs. As the developer tends to use many (complex) conditions and repeat them in the program, such annotations reduce the compilation scanning time and increase the overall code readability. Built on top of Blue+, DisBlue+ presents its proposed distributed concept, which is to divide each program class into its prototype and definition, as disjoint distributed pieces, such that each class definition is compiled with only its related compiled prototypes (interfaces). Such a concept reduces the amount of code transferred over the network, minimizes the dependencies among the disjoint pieces, and removes any possible synchronization between them. To test their efficiency, Blue+ and DisBlue+ were verified with large-size codes against some existing compilers, namely Javac, DJavac, and CDjava.

  12. Atlantic Protected Species Assessment Aerial Surveys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data sets include a compilation of aerial line-transect surveys conducted over continental shelf waters of the southeastern U.S. Surveys have been conducted...

  13. Caribbean Marine Mammal Assessment Vessel Surveys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data sets are a compilation of large vessel surveys for marine mammal stock assessments in Caribbean waters conducted during 2000-2001. These surveys were...

  14. Compilation of data for radionuclide transport analysis

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-11-01

    This report is one of the supporting documents to the updated safety assessment (project SAFE) of the Swedish repository for low and intermediate level waste, SFR 1. A number of calculation cases for quantitative analysis of radionuclide release and dose to man are defined based on the expected evolution of the repository, geosphere and biosphere in the Base Scenario and other scenarios selected. The data required by the selected near field, geosphere and biosphere models are given and the values selected for the calculations are compiled in tables. The main sources for the selected values of the migration parameters in the repository and geosphere models are the safety assessment of a deep repository for spent fuel, SR 97, and the preliminary safety assessment of a repository for long-lived, low- and intermediate level waste, SFL 3-5. For the biosphere models, both site-specific data and generic values of the parameters are selected. The applicability of the selected parameter values is discussed and the uncertainty is qualitatively addressed for data to the repository and geosphere migration models. Parameter values selected for these models are in general pessimistic in order not to underestimate the radionuclide release rates. It is judged that this approach, combined with the selected calculation cases, will illustrate the effects of uncertainties in the processes and events that affect the evolution of the system, as well as in the quantitative data that describe it. The biosphere model allows for probabilistic calculations and the uncertainty in input data is quantified by giving minimum, maximum and mean values as well as the type of probability distribution function.

  15. Spatial Compilation of Holocene Volcanic Vents in the Western Conterminous United States

    Science.gov (United States)

    Ramsey, D. W.; Siebert, L.

    2015-12-01

    A spatial compilation of all known Holocene volcanic vents in the western conterminous United States has been assembled. This compilation records volcanic vent location (latitude/longitude coordinates), vent type (cinder cone, dome, etc.), geologic map unit description, rock type, age, numeric age and reference (if dated), geographic feature name, mapping source, and, where available, spatial database source. Primary data sources include: USGS geologic maps, USGS Data Series, the Smithsonian Global Volcanism Program (GVP) catalog, and published journal articles. A total of 726 volcanic vents have been identified from 45 volcanoes or volcanic fields spanning ten states. These vents are found along the length of the Cascade arc in the Pacific Northwest, widely around the Basin and Range province, and at the southern margin of the Colorado Plateau into New Mexico. The U.S. Geological Survey (USGS) National Volcano Early Warning System (NVEWS) identifies 28 volcanoes and volcanic centers in the western conterminous U.S. that pose moderate, high, or very high threats to surrounding communities based on their recent eruptive histories and their proximity to vulnerable people, property, and infrastructure. This compilation enhances the understanding of volcano hazards that could threaten people and property by providing the context of where Holocene eruptions have occurred and where future eruptions may occur. Locations in this compilation can be spatially compared to located earthquakes, used as generation points for numerical hazard models or hazard zonation buffering, and analyzed for recent trends in regional volcanism and localized eruptive activity.

  16. Pick'n'Fix: Capturing Control Flow in Modular Compilers

    DEFF Research Database (Denmark)

    Day, Laurence E.; Bahr, Patrick

    2014-01-01

    We present a modular framework for implementing languages with effects and control structures such as loops and conditionals. This framework enables modular definitions of both syntax and semantics as well as modular implementations of compilers and virtual machines. In order to compile control s...

  17. abc: The AspectBench Compiler for AspectJ

    DEFF Research Database (Denmark)

    Allan, Chris; Avgustinov, Pavel; Christensen, Aske Simon;

    2005-01-01

    abc is an extensible, optimising compiler for AspectJ. It has been designed as a workbench for experimental research in aspect-oriented programming languages and compilers. We outline a programme of research in these areas, and we review how abc can help in achieving those research goals...

  18. NUAPC:A Parallelizing Compiler for C++

    Institute of Scientific and Technical Information of China (English)

    朱根江; 谢立; 等

    1997-01-01

    This paper presents a model for an automatically parallelizing compiler based on C++ which consists of compile-time and run-time parallelizing facilities. The paper also describes a method for finding both intra-object and inter-object parallelism. The parallelism detection is completely transparent to users.

  19. 38 CFR 45.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Semi-annual compilation. 45.600 Section 45.600 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) NEW RESTRICTIONS ON LOBBYING Agency Reports § 45.600 Semi-annual compilation. (a) The head...

  20. Production compilation : A simple mechanism to model complex skill acquisition

    NARCIS (Netherlands)

    Taatgen, N.A.; Lee, F.J.

    2003-01-01

    In this article we describe production compilation, a mechanism for modeling skill acquisition. Production compilation has been developed within the ACT-Rational (ACT-R; J. R. Anderson, D. Bothell, M. D. Byrne, & C. Lebiere, 2002) cognitive architecture and consists of combining and specializing tas

  1. Global compilation of marine varve records

    Science.gov (United States)

    Schimmelmann, Arndt; Lange, Carina B.; Schieber, Juergen; Francus, Pierre; Ojala, Antti E. K.; Zolitschka, Bernd

    2017-04-01

    Marine varves contain highly resolved records of geochemical and other paleoceanographic and paleoenvironmental proxies with annual to seasonal resolution. We present a global compilation of marine varved sedimentary records throughout the Holocene and Quaternary covering more than 50 sites worldwide. Marine varve deposition and preservation typically depend on environmental and sedimentological conditions, such as a sufficiently high sedimentation rate, severe depletion of dissolved oxygen in bottom water to exclude bioturbation by macrobenthos, and a seasonally varying sedimentary input to yield a recognizable rhythmic varve pattern. Additional oceanographic factors may include the strength and depth range of the Oxygen Minimum Zone (OMZ) and regional anthropogenic eutrophication. Modern to Quaternary marine varves are not only found in those parts of the open ocean that comply with these conditions, but also in fjords, embayments and estuaries with thermohaline density stratification, and nearshore 'marine lakes' with strong hydrologic connections to ocean water. Marine varves have also been postulated in pre-Quaternary rocks. In the case of non-evaporitic laminations in fine-grained ancient marine rocks, such as banded iron formations and black shales, laminations may not be varves but instead may have multiple alternative origins such as event beds or formation via bottom currents that transported and sorted silt-sized particles, clay floccules, and organic-mineral aggregates in the form of migrating bedload ripples. Modern marine ecosystems on continental shelves and slopes, in coastal zones and in estuaries are susceptible to stress by anthropogenic pressures, for example in the form of eutrophication, enhanced OMZs, and expanding ranges of oxygen-depletion in bottom waters. Sensitive laminated sites may play the important role of a 'canary in the coal mine' where monitoring the character and geographical extent of laminations/varves serves as a diagnostic

  2. Experiences with Compiler Support for Processors with Exposed Pipelines

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Schleuniger, Pascal; Hindborg, Andreas Erik;

    2015-01-01

    Field programmable gate arrays, FPGAs, have become an attractive implementation technology for a broad range of computing systems. We recently proposed a processor architecture, Tinuso, which achieves high performance by moving complexity from hardware to the compiler tool chain. This means that the compiler tool chain must handle the increased complexity. However, it is not clear if current production compilers can successfully meet the strict constraints on instruction order and generate efficient object code. In this paper, we present our experiences developing a compiler backend using the GNU Compiler Collection, GCC. For a set of C benchmarks, we show that a Tinuso implementation with our GCC backend reaches a relative speedup of up to 1.73 over a similar Xilinx MicroBlaze configuration while using 30% fewer hardware resources. While our experiences are generally positive, we expose some...

  3. Code Commentary and Automatic Refactorings using Feedback from Multiple Compilers

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Probst, Christian W.; Karlsson, Sven

    2014-01-01

    Optimizing compilers are essential to the performance of parallel programs on multi-core systems. It is attractive to expose parallelism to the compiler letting it do the heavy lifting. Unfortunately, it is hard to write code that compilers are able to optimize aggressively and therefore tools exist that can guide programmers with refactorings allowing the compilers to optimize more aggressively. We target the problem with many false positives that these tools often generate, where the amount of feedback can be overwhelming for the programmer. Our approach is to use a filtering scheme based on feedback from multiple compilers and show how we are able to filter out 87.6% of the comments by only showing the most promising comments.
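
    A minimal sketch of the filtering idea described above: keep only those refactoring comments that more than one compiler reports for the same source location (the report format and the threshold are hypothetical, not taken from the paper):

      # Hypothetical missed-optimization reports, keyed by (file, line).
      reports = {
          "gcc":   {("edge.c", 42): "loop not vectorized: possible aliasing"},
          "clang": {("edge.c", 42): "loop not vectorized: cannot prove independence",
                    ("edge.c", 90): "call not inlined"},
          "icc":   {("edge.c", 42): "vector dependence assumed"},
      }

      def promising_locations(reports, min_compilers=2):
          # Keep locations flagged independently by at least `min_compilers` compilers.
          seen = {}
          for compiler, comments in reports.items():
              for loc in comments:
                  seen.setdefault(loc, set()).add(compiler)
          return {loc for loc, who in seen.items() if len(who) >= min_compilers}

      print(promising_locations(reports))   # {('edge.c', 42)} -- the most promising comment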

  4. Fully Countering Trusting Trust through Diverse Double-Compiling

    CERN Document Server

    Wheeler, David A

    2010-01-01

    An Air Force evaluation of Multics, and Ken Thompson's Turing award lecture ("Reflections on Trusting Trust"), showed that compilers can be subverted to insert malicious Trojan horses into critical software, including themselves. If this "trusting trust" attack goes undetected, even complete analysis of a system's source code will not find the malicious code that is running. Previously-known countermeasures have been grossly inadequate. If this attack cannot be countered, attackers can quietly subvert entire classes of computer systems, gaining complete control over financial, infrastructure, military, and/or business systems worldwide. This dissertation's thesis is that the trusting trust attack can be detected and effectively countered using the "Diverse Double-Compiling" (DDC) technique, as demonstrated by (1) a formal proof that DDC can determine if source code and generated executable code correspond, (2) a demonstration of DDC with four compilers (a small C compiler, a small Lisp compiler, a small malic...

  5. FY 1998 annual report on the compilation of database of experts for development of welfare equipment. Surveys for collecting information of institutes supporting, e.g., welfare equipment research and development; 1998 nendo fukushi yogu no kaihatsu ni kakawaru senmonteki chiken wo yushita jinzai database no kochiku chosa hokokusho. Fukushi yogu kenkyu kaihatsu nado shien kikan joho seibi chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    In order to promote the supply of high-quality, inexpensive welfare equipment, data about public testing and research institutes throughout Japan were collected covering their organizations, research themes, areas of expertise, relations with welfare equipment, future activity guidelines and the like, and expert employees, so that enterprises can obtain adequate advice from them when they plan to start research on and development of related equipment. Information was obtained by means of questionnaires returned from a total of 127 institutes related to industrial testing and research, and 61 institutes, e.g., rehabilitation centers, deeply related to welfare equipment. This report compiles information on 48 institutes which are conducting research or the like on welfare equipment. The compiled lists of the local institutes supporting welfare equipment research and development activities describe, e.g., their total expenses, numbers of technical experts, the status of the contact points responsible for welfare equipment or the like, percentages of expenses related to welfare equipment in total expenses, percentages of technical experts responsible for welfare equipment among total numbers of technical experts, and major research themes on welfare equipment. (NEDO)

  7. Compiling an OPEC Word List: A Corpus-Informed Lexical Analysis

    Directory of Open Access Journals (Sweden)

    Ebtisam Saleh Aluthman

    2017-01-01

    Full Text Available The present study is conducted within the borders of lexicographic research, where corpora have increasingly become all-pervasive. The overall goal of this study is to compile an open-source OPEC[1] Word List (OWL) that is available for lexicographic research and vocabulary learning related to English language learning for the purpose of oil marketing and the oil industries. To achieve this goal, an OPEC Monthly Reports Corpus (OMRC) comprising 1,004,542 words was compiled. The OMRC consists of 40 OPEC monthly reports released between 2003 and 2015. Consideration was given to both range and frequency criteria when compiling the OWL, which consists of 255 word types. Along with this basic goal, this study aims to investigate the coverage of the most well-recognised word lists, the General Service List of English Words (GSL) (West, 1953) and the Academic Word List (AWL) (Coxhead, 2000), in the OMRC corpus. The 255 word types included in the OWL do not overlap with either the AWL or the GSL. Results suggest the necessity of making this discipline-specific word list for ESL students of the oil marketing industries. The availability of the OWL has significant pedagogical contributions to curriculum design, learning activities and the overall process of vocabulary learning in the context of teaching English for specific purposes (ESP). Keywords: vocabulary profiling; vocabulary learning; word list; OPEC; ESP. [1] OPEC stands for the Organisation of Petroleum Exporting Countries.
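
    The range-and-frequency selection described above can be sketched as follows; the thresholds, file handling and the GSL/AWL exclusion lists are placeholders, not the study's actual criteria:

      from collections import Counter, defaultdict
      import re

      def build_word_list(report_texts, excluded, min_freq=50, min_range=20):
          """Select word types by overall frequency and by range (the number of
          monthly reports in which they occur), excluding GSL/AWL items."""
          freq, doc_range = Counter(), defaultdict(int)
          for text in report_texts:
              tokens = re.findall(r"[a-z]+", text.lower())
              freq.update(tokens)
              for t in set(tokens):
                  doc_range[t] += 1
          return sorted(w for w in freq
                        if w not in excluded
                        and freq[w] >= min_freq
                        and doc_range[w] >= min_range)

      # usage (placeholders): reports = [open(p, encoding="utf-8").read() for p in report_paths]
      # owl = build_word_list(reports, excluded=gsl_words | awl_words)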

  8. Compilation of kinetic data for geochemical calculations

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, R.C. [Monitor Scientific, LLC., Denver, Colorado (United States); Savage, D. [Quintessa, Ltd., Nottingham (United Kingdom); Sasamoto, Hiroshi; Shibata, Masahiro; Yui, Mikazu [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (Japan). Tokai Works

    2000-01-01

    Kinetic data, including rate constants, reaction orders and activation energies, are compiled for 34 hydrolysis reactions involving feldspars, sheet silicates, zeolites, oxides, pyroxenes and amphiboles, and for similar reactions involving calcite and pyrite. The data are compatible with a rate law consistent with surface reaction control and transition-state theory, which is incorporated in the geochemical software packages EQ3/6 and GWB. Kinetic data for the reactions noted above are strictly compatible with the transition-state rate law only under far-from-equilibrium conditions. It is possible that the data are conceptually consistent with this rate law under both far-from-equilibrium and near-to-equilibrium conditions, but this should be confirmed whenever possible through analysis of the original experimental results. Due to limitations in the availability of kinetic data for mineral-water reactions, and in order to simplify evaluations of geochemical models of groundwater evolution, it is convenient to assume local equilibrium in such models whenever possible. To assess whether this assumption is reasonable, a modeling approach accounting for coupled fluid flow and water-rock interaction is described that can be used to estimate the spatial and temporal scales of local equilibrium. The approach is demonstrated for conditions involving groundwater flow in fractures at JNC's Kamaishi in-situ test site, and is also used to estimate the travel time necessary for oxidizing surface waters to migrate to the level of a HLW repository in crystalline rock. The question of whether local equilibrium is a reasonable assumption must be addressed using an appropriate modeling approach. To be appropriate for conditions at the Kamaishi site using the modeling approach noted above, the fracture fill must closely approximate a porous medium, groundwater flow must be purely advective and diffusion of solutes across the fracture-host rock boundary must not occur. Moreover, the
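
    For orientation, the rate law referred to above, as commonly implemented in codes such as EQ3/6, has the general transition-state form below (the compiled quantities are the rate constant, reaction orders and activation energy; exponent conventions vary between codes, so this is a representative expression rather than the report's own):

      $r \;=\; k\,S\,\prod_i a_i^{\,n_i}\Big[1-\Big(\tfrac{Q}{K}\Big)^{\sigma}\Big], \qquad k(T) \;=\; k_{25}\,\exp\!\Big[\tfrac{-E_a}{R}\Big(\tfrac{1}{T}-\tfrac{1}{298.15}\Big)\Big]$

    where $r$ is the reaction rate, $S$ the reactive surface area, $a_i$ the activities of catalysing or inhibiting species with reaction orders $n_i$, $Q/K$ the saturation ratio, and $E_a$ the activation energy.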

  9. A special purpose silicon compiler for designing supercomputing VLSI systems

    Science.gov (United States)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  10. Twelve-month prevalence, comorbidity and correlates of mental disorders in Germany: the Mental Health Module of the German Health Interview and Examination Survey for Adults (DEGS1-MH).

    Science.gov (United States)

    Jacobi, Frank; Höfler, Michael; Siegert, Jens; Mack, Simon; Gerschler, Anja; Scholl, Lucie; Busch, Markus A; Hapke, Ulfert; Maske, Ulrike; Seiffert, Ingeburg; Gaebel, Wolfgang; Maier, Wolfgang; Wagner, Michael; Zielasek, Jürgen; Wittchen, Hans-Ulrich

    2014-09-01

    This paper provides up to date prevalence estimates of mental disorders in Germany derived from a national survey (German Health Interview and Examination Survey for Adults, Mental Health Module [DEGS1-MH]). A nationally representative sample (N = 5318) of the adult (18-79) population was examined by clinically trained interviewers with a modified version of the Composite International Diagnostic Interview (DEGS-CIDI) to assess symptoms, syndromes and diagnoses according to DSM-IV-TR (25 diagnoses covered). Of the participants 27.7% met criteria for at least one mental disorder during the past 12 months, among them 44% with more than one disorder and 22% with three or more diagnoses. Most frequent were anxiety (15.3%), mood (9.3%) and substance use disorders (5.7%). Overall rates for mental disorders were substantially higher in women (33% versus 22% in men), younger age group (18-34: 37% versus 20% in age group 65-79), when living without a partner (37% versus 26% with partnership) or with low (38%) versus high socio-economic status (22%). High degree of urbanization (> 500,000 inhabitants versus < 20,000) was associated with elevated rates of psychotic (5.2% versus 2.5%) and mood disorders (13.9% versus 7.8%). The findings confirm that almost one third of the general population is affected by mental disorders and inform about subsets in the population who are particularly affected. Copyright © 2014 John Wiley & Sons, Ltd.

  11. A Compiler for CPPNs: Transforming Phenotypic Descriptions Into Genotypic Representations

    DEFF Research Database (Denmark)

    Risi, Sebastian

    2013-01-01

    ... the question of how to start evolution from a promising part of the search space becomes more and more important. To address this challenge, we introduce the concept of a CPPN-Compiler, which allows the user to directly compile a high-level description of the desired starting structure into the CPPN itself ... -specific regularities like symmetry or repetition. Thus the results presented in this paper open up a new research direction in GDS, in which specialized CPPN-Compilers for different domains could help to overcome the black box of evolutionary optimization...

  12. Efficient Compilation of a Class of Variational Forms

    CERN Document Server

    Kirby, Robert C

    2012-01-01

    We investigate the compilation of general multilinear variational forms over affine simplices and prove a representation theorem for the representation of the element tensor (element stiffness matrix) as the contraction of a constant reference tensor and a geometry tensor that accounts for geometry and variable coefficients. Based on this representation theorem, we design an algorithm for efficient pretabulation of the reference tensor. The new algorithm has been implemented in the FEniCS Form Compiler (FFC) and improves on a previous loop-based implementation by several orders of magnitude, thus shortening compile-times and development cycles for users of FFC.
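
    The representation theorem referred to above writes each element tensor as the contraction of a reference tensor, computed once on the reference simplex, with an element-specific geometry tensor; schematically (the index names are illustrative):

      $A^K_{i} \;=\; \sum_{\alpha} A^0_{i\alpha}\, G_K^{\alpha}$

    Here $i$ is the multi-index of the element tensor, $\alpha$ runs over the auxiliary indices introduced by the mapping to the reference cell, $A^0$ depends only on the form and the reference element (and can therefore be pretabulated), and $G_K$ carries the per-element geometry and variable coefficients.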

  13. Compiler Optimization: A Case for the Transformation Tool Contest

    Directory of Open Access Journals (Sweden)

    Sebastian Buchwald

    2011-11-01

    An optimizing compiler consists of a front end parsing a textual programming language into an intermediate representation (IR), a middle end performing optimizations on the IR, and a back end lowering the IR to a target representation (TR) built of operations supported by the target hardware. In modern compiler construction graph-based IRs are employed. Optimization and lowering tasks can then be implemented with graph transformation rules. This case provides two compiler tasks to evaluate the participating tools regarding performance.
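    As a minimal sketch of an optimization expressed as a graph rewrite (a toy dictionary-based IR invented for illustration, not the contest's actual rule syntax or graph model), constant folding can be written as a rule that matches an Add node whose operands are both constants and replaces it with a single Const node:

        # Toy IR: nodes are dicts; "Const" nodes carry a value, "Add" nodes reference operand ids.
        ir = {
            1: {"op": "Const", "value": 2},
            2: {"op": "Const", "value": 3},
            3: {"op": "Add", "args": [1, 2]},
            4: {"op": "Mul", "args": [3, 3]},
        }

        def fold_constants(ir):
            """One rewrite pass: Add(Const a, Const b) -> Const(a + b)."""
            for nid, node in ir.items():
                if node["op"] == "Add":
                    a, b = (ir[i] for i in node["args"])
                    if a["op"] == "Const" and b["op"] == "Const":
                        ir[nid] = {"op": "Const", "value": a["value"] + b["value"]}
            return ir

        print(fold_constants(ir)[3])  # {'op': 'Const', 'value': 5}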

  14. Writing Compilers and Interpreters A Software Engineering Approach

    CERN Document Server

    Mak, Ronald

    2011-01-01

    Long-awaited revision to a unique guide that covers both compilers and interpreters. Revised, updated, and now focusing on Java instead of C++, this long-awaited, latest edition of this popular book teaches programmers and software engineering students how to write compilers and interpreters using Java. You'll write compilers and interpreters as case studies, generating general assembly code for a Java Virtual Machine that takes advantage of the Java Collections Framework to shorten and simplify the code. In addition, coverage includes Java Collections Framework, UML modeling, object-oriented p

  15. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  16. The changing importance of key factors associated with anaemia in 6- to 59-month-old children in a sub-Saharan African setting where malaria is on the decline: analysis of the Rwanda Demographic and Health Survey 2010.

    Science.gov (United States)

    Nkulikiyinka, Richard; Binagwaho, Agnes; Palmer, Katie

    2015-12-01

    To estimate the relative contribution of malaria and other potential determinants to current anaemia prevalence in Rwanda. The database for this study was the Rwanda Demographic and Health Survey 2010. Haemoglobin and malaria test results, and additional exposures ascertained through mothers' interviews, were analysed for all eligible children age 6-59 months (n = 4068), in addition to diet data available for the youngest under 5-year-old per household. We examined anaemia-exposure associations through forward logistic regression, first for the overall population (n = 3685), and second, for the subpopulation with diet data (n = 1934). In the overall study population, malaria was strongly associated with anaemia (OR = 6.83, 95% CI: 2.90-16.05), but population impact was modest (population-attributable fraction = 2.5%). Factors associated with lower odds of anaemia were recent de-worming medication (within the past six months; OR = 0.60, 95% CI: 0.49-0.74), female sex (OR = 0.76, 95% CI: 0.66-0.87), increasing age, residence in North Province and an educated mother. Being underweight and recent fever (within the past two weeks) were associated with higher odds. In the subpopulation with diet data, odds were lower with consumption of vitamin A-rich foods (OR = 0.66, 95% CI: 0.50-0.88); and higher in households with many young children. Malaria remains a strong determinant of anaemia for the individual child: transmission control efforts must be maintained. At population level, to further reduce anaemia prevalence, promoting regular vitamin A intake from natural sources and reducing intestinal helminths burden appear the most promising strategies to explore; exploring potential hitherto unidentified sex-linked factors is warranted. © 2015 John Wiley & Sons Ltd.
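    A large individual-level effect can coexist with a modest population impact when the exposure is rare. The standard Levin formula for the population-attributable fraction (shown here as the textbook definition; the paper's exact estimator is not stated in the abstract) is

        PAF = \frac{p\,(RR - 1)}{1 + p\,(RR - 1)},

    where p is the prevalence of the exposure (here, malaria) and RR the relative risk (often approximated by the odds ratio when the outcome is not too common). With a low malaria prevalence, even a large odds ratio yields a small attributable fraction, which helps reconcile the strong individual-level association with the modest reported population impact.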

  17. Aeromagnetic map compilation: procedures for merging and an example from Washington

    Directory of Open Access Journals (Sweden)

    C. Finn

    2000-06-01

    Rocks in Antarctica and offshore have widely diverse magnetic properties. Consequently, aeromagnetic data collected there can improve knowledge of the geologic, tectonic and geothermal characteristics of the region. Aeromagnetic data can map concealed structures such as faults, folds and dikes, ascertain basin thickness and locate buried volcanic rocks, as well as some intrusive and metamorphic rocks. Gridded, composite data sets allow a view of continental-scale trends that individual data sets do not provide and link widely separated areas of outcrop and disparate geologic studies. Individual magnetic surveys must be processed so that they match adjacent surveys prior to merging. A consistent representation of the Earth's magnetic field (the International Geomagnetic Reference Field, IGRF) must be removed from each data set. All data sets need to be analytically continued to the same flight elevation, with their datums shifted to match adjacent data. I advocate minimal processing to best represent the individual surveys in the merged compilation. An example of a compilation of aeromagnetic surveys from Washington illustrates the utility of aeromagnetic maps for providing synoptic views of regional tectonic features.
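    A minimal sketch of the leveling step described here, using hypothetical toy grids (the IGRF value, grid names and overlap geometry are invented for illustration; analytic continuation to a common flight elevation is a separate step that is not shown):

        import numpy as np

        def level_to_neighbor(survey, neighbor, igrf, overlap_mask):
            """Remove the main-field (IGRF) model and shift the datum so the survey
            matches an already-processed neighboring survey in their overlap region."""
            residual = survey - igrf
            shift = np.mean(neighbor[overlap_mask] - residual[overlap_mask])
            return residual + shift

        rng = np.random.default_rng(0)
        igrf = np.full((50, 50), 52000.0)                 # constant main field (nT), toy value
        neighbor = rng.normal(0.0, 100.0, (50, 50))       # already-leveled anomaly grid
        survey = igrf + neighbor + 35.0                   # same anomaly, offset datum
        overlap = np.zeros((50, 50), dtype=bool)
        overlap[:, :10] = True                            # pretend the first 10 columns overlap

        leveled = level_to_neighbor(survey, neighbor, igrf, overlap)
        print(round(float(np.mean(leveled - neighbor)), 3))  # ~0.0 after leveling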

  18. Monthly Meteorological Reports

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Monthly forms that do not fit into any regular submission. Tabulation sheets and generic monthly forms designed to capture miscellaneous monthly observations.

  19. Electric power monthly, April 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-07

    The Electric Power Monthly is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the US, Census division, and State levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions.

  20. Electric power monthly, May 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-25

    The Electric Power Monthly (EPM) is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the US, Census division, and State levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions.

  1. Alaska NWRS Legacy Seabird Monitoring Data Inventory and Compilation

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The objective of this project is to compile and standardize data from the Alaska Peninsula/Becharof, Kodiak, Togiak, and Yukon Delta National Wildlife Refuges. This...

  2. Compiler for Fast, Accurate Mathematical Computing on Integer Processors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically intensive computations...

  3. Compilation and Synthesis for Fault-Tolerant Digital Microfluidic Biochips

    DEFF Research Database (Denmark)

    Alistar, Mirela

    of electrodes to perform operations such as dispensing, transport, mixing, split, dilution and detection. Researchers have proposed compilation approaches, which, starting from a biochemical application and a biochip architecture, determine the allocation, resource binding, scheduling, placement and routing...

  4. Digital compilation bedrock geologic map of the Warren quadrangle, Vermont

    Data.gov (United States)

    Vermont Center for Geographic Information — Digital Data from VG95-4A Walsh, GJ, Haydock, S, Prewitt, J, Kraus, J, Lapp, E, O'Loughlin, S, and Stanley, RS, 1995, Digital compilation bedrock geologic map of the...

  5. Digital compilation bedrock geologic map of the Milton quadrangle, Vermont

    Data.gov (United States)

    Vermont Center for Geographic Information — Digital Data from VG95-8A Dorsey, R, Doolan, B, Agnew, PC, Carter, CM, Rosencrantz, EJ, and Stanley, RS, 1995, Digital compilation bedrock geologic map of the Milton...

  6. Digital compilation bedrock geologic map of the Lincoln quadrangle, Vermont

    Data.gov (United States)

    Vermont Center for Geographic Information — Digital Data from VG95-5A Stanley, R, DelloRusso, V, Haydock, S, Lapp, E, O'Loughlin, S, Prewitt, J,and Tauvers, PR, 1995, Digital compilation bedrock geologic map...

  7. Solid state technology: A compilation. [on semiconductor devices

    Science.gov (United States)

    1973-01-01

    A compilation, covering selected solid state devices developed and integrated into systems by NASA to improve performance, is presented. Data are also given on device shielding in hostile radiation environments.

  8. Specification and compilation of real-time stream processing applications

    NARCIS (Netherlands)

    Geuns, Stephanus Joannes

    2015-01-01

    This thesis is concerned with the specification, compilation and corresponding temporal analysis of real-time stream processing applications that are executed on embedded multiprocessor systems. An example of such applications are software defined radio applications. These applications typically hav

  9. Trident: An FPGA Compiler Framework for Floating-Point Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Tripp J. L. (Justin L.); Peterson, K. D. (Kristopher D.); Poznanovic, J. D. (Jeffrey Daniel); Ahrens, C. M. (Christine Marie); Gokhale, M. (Maya)

    2005-01-01

    Trident is a compiler for floating point algorithms written in C, producing circuits in reconfigurable logic that exploit the parallelism available in the input description. Trident automatically extracts parallelism and pipelines loop bodies using conventional compiler optimizations and scheduling techniques. Trident also provides an open framework for experimentation, analysis, and optimization of floating point algorithms on FPGAs and the flexibility to easily integrate custom floating point libraries.

  10. Approximate Compilation of Constraints into Multivalued Decision Diagrams

    DEFF Research Database (Denmark)

    Hadzic, Tarik; Hooker, John N.; O’Sullivan, Barry;

    2008-01-01

    We present an incremental refinement algorithm for approximate compilation of constraint satisfaction models into multivalued decision diagrams (MDDs). The algorithm uses a vertex splitting operation that relies on the detection of equivalent paths in the MDD. Although the algorithm is quite...... by replacing the equivalence test with a constraint-specific measure of distance. We demonstrate the value of the approach for approximate and exact MDD compilation and evaluate its benefits in one of the main MDD application domains, interactive configuration....
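    A minimal sketch of approximate (relaxed) MDD compilation for a single toy constraint, here sum(x_i) <= limit with layer states keyed by the partial sum; this illustrates the general width-bounded, over-approximating flavour of approximate compilation, not the paper's specific vertex-splitting refinement or its distance measure:

        def relaxed_mdd_layers(domains, limit, max_width):
            """Top-down layer construction; merging states when a layer exceeds
            max_width relaxes the constraint, so the MDD over-approximates the
            solution set."""
            layer = {0}                                   # root state: partial sum 0
            layers = [layer]
            for dom in domains:
                nxt = {s + v for s in layer for v in dom if s + v <= limit}
                if len(nxt) > max_width:
                    keep = sorted(nxt)[: max_width - 1]   # keep the smallest partial sums
                    merged = min(s for s in nxt if s not in keep)
                    nxt = set(keep) | {merged}            # represent merged states by their minimum
                layer = nxt
                layers.append(layer)
            return layers

        # Three variables with domain {0,1,2,3}, sum <= 5, at most 3 states per layer.
        for depth, states in enumerate(relaxed_mdd_layers([range(4)] * 3, limit=5, max_width=3)):
            print(depth, sorted(states))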

  11. Compiler writing system detail design specification. Volume 2: Component specification

    Science.gov (United States)

    Arthur, W. J.

    1974-01-01

    The logic modules and data structures composing the Meta-translator module are described. This module is responsible for the actual generation of the executable language compiler as a function of the input Meta-language. Machine definitions are also processed and placed as encoded data on the compiler library data file. The transformation of the intermediate language into target-language object text is also described.

  12. On search guide phrase compilation for recommending home medical products.

    Science.gov (United States)

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
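    A minimal sketch of the candidate-suggestion step, with hypothetical class names and vocabularies and a deliberately crude token-overlap score (a real system would use nursing vocabularies and a proper retrieval model); the tool only proposes hints, and a nurse makes the final class assignment:

        CLASSES = {
            "mobility": {"walking", "wheelchair", "transfer", "gait", "balance"},
            "bathing": {"bath", "shower", "washing", "hygiene"},
            "vision": {"sight", "reading", "glasses", "magnifier", "vision"},
        }

        def suggest_classes(phrase, top_k=2):
            """Rank classes by how many of their vocabulary terms appear in the phrase."""
            tokens = set(phrase.lower().split())
            scores = {name: len(tokens & vocab) for name, vocab in CLASSES.items()}
            ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
            return [name for name, score in ranked[:top_k] if score > 0]

        print(suggest_classes("trouble walking and keeping balance"))  # ['mobility']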

  13. Monthly energy review, December 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-21

    This publication presents an overview of EIA's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. A preview of surveys of alternative-fuel providers' vehicle fleets is also included. The publication is intended for use by members of Congress, Federal and State agencies, energy analysts, and the general public.

  14. Aventis Pasteur vaccines containing inactivated hepatitis A virus: a compilation of immunogenicity data.

    Science.gov (United States)

    Vidor, E; Dumas, R; Porteret, V; Bailleux, F; Veitch, K

    2004-04-01

    Inactivated hepatitis A vaccines were developed in the 1980s and were introduced during the early 1990s. The Aventis Pasteur (AvP) inactivated hepatitis A virus antigen is used in several different vaccine formulations licensed for adults and children. Presented here are the immunogenicity results compiled from 37 clinical trials performed in 20 different countries between 1991 and 2001 in which these vaccines were administered to adults (16 years of age and over), children (aged 12 months-17 years), and infants (younger than 12 months). The accumulated clinical experience with these hepatitis A virus-containing vaccines demonstrates the excellent immunogenicity of this antigen in a wide range of situations. As with other licensed inactivated hepatitis A vaccines, immunological priming is achieved in virtually all vaccinees after a single-dose primary immunization, and it may be reinforced by a booster vaccination administered 6-36 months after the primary vaccination.

  15. Electric power monthly, August 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-13

    The Electric Power Monthly (EPM) presents monthly electricity statistics. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. The EPM is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the US, Census division, and State levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions.

  16. Natural gas monthly, April 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-04-01

    This issue of the Natural Gas Monthly presents the most recent estimates of natural gas data from the Energy Information Administration (EIA). Estimates extend through April 1998 for many data series. The report highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, feature articles are presented designed to assist readers in using and interpreting natural gas information. This issue contains the special report, "Natural Gas 1997: A Preliminary Summary." This report provides information on natural gas supply and disposition for the year 1997, based on monthly data through December from EIA surveys. 6 figs., 28 tabs.

  17. Electric power monthly, September 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-17

    The Electric Power Monthly (EPM) presents monthly electricity statistics. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. The EPM is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the US, Census division, and State levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions.

  18. Retargeting of existing FORTRAN program and development of parallel compilers

    Science.gov (United States)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: a flexible granularity model, which allows a compromise between two extreme granularity models; a communication model, which is capable of precisely describing interprocessor communication timings and patterns; a loop type detection strategy, which identifies different types of loops; a critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with associated communication costs; and a loop allocation strategy, which realizes optimal overlap between computation and communication in the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated code is highly parallelized, achieving speedups on systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and the proposed communication model is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.
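    The coloring-based scheduler itself is not reproduced here, but the critical-path metric that such list schedulers commonly rank tasks by can be sketched on a toy task graph (task names and costs are invented for illustration):

        from functools import lru_cache

        cost = {"a": 2, "b": 3, "c": 1, "d": 4}           # task execution costs
        succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

        @lru_cache(maxsize=None)
        def critical_path(task):
            """Cost of the task plus the longest chain of successor costs below it."""
            return cost[task] + max((critical_path(s) for s in succ[task]), default=0)

        priorities = sorted(cost, key=critical_path, reverse=True)
        print([(t, critical_path(t)) for t in priorities])  # [('a', 9), ('b', 7), ('c', 5), ('d', 4)]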

  19. CAPS OpenACC Compilers: Performance and Portability

    CERN Document Server

    CERN. Geneva

    2013-01-01

    The announcement in late 2011 of the new OpenACC directive-based programming standard, supported by the CAPS, CRAY and PGI compilers, has opened the door to more scientific applications that can be ported onto many-core systems. Following a porting methodology, this talk will first review the principles of programming with OpenACC and then the advanced features available in the CAPS compilers to further optimize OpenACC applications: library integration and tuning directives with auto-tune mechanisms to build applications adaptive to different GPUs. CAPS compilers use hardware vendors' backends such as NVIDIA CUDA and OpenCL, making them the only OpenACC compilers supporting various many-core architectures. About the speaker: Stéphane Bihan is co-founder and currently Director of Sales and Marketing at CAPS enterprise. He has held several R&D positions in companies such as ARC international plc in London, Canon Research Center France, ACE compiler experts in Amsterdam and the INRIA r...

  20. And the Survey Says…

    Science.gov (United States)

    White, Susan C.

    2015-01-01

    As we saw last month, over 40% of the students who recently earned bachelor's degrees in physics enter the job market. There are employment opportunities for these graduates in all areas of the economy. When we contact graduates, we ask them where they are working, and we use their responses to compile a list of employers in each state who have…

  1. Monthly Weather Review

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Supplements to the Monthly Weather Review publication. The Weather Bureau published the Monthly weather review Supplement irregularly from 1914 to 1949. The...

  2. Proposing Chinese Pharmacists Month

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Dear Pharmacists: Today I would like to share with you the American Pharmacists Month, which is celebrated in October every year. This month-long observance is promoted by the American Pharmacists Association.

  3. Compilation of current high energy physics experiments - Sept. 1978

    Energy Technology Data Exchange (ETDEWEB)

    Addis, L.; Odian, A.; Row, G. M.; Ward, C. E. W.; Wanderer, P.; Armenteros, R.; Joos, P.; Groves, T. H.; Oyanagi, Y.; Arnison, G. T. J.; Antipov, Yu; Barinov, N.

    1978-09-01

    This compilation of current high-energy physics experiments is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and the nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. Nominally, the compilation includes summaries of all high-energy physics experiments at the above laboratories that were approved (and not subsequently withdrawn) before about June 1978, and had not completed taking of data by 1 January 1975. The experimental summaries are supplemented with three indexes to the compilation, several vocabulary lists giving names or abbreviations used, and a short summary of the beams at each of the laboratories (except Rutherford). The summaries themselves are included on microfiche. (RWR)

  4. A DRAM compiler algorithm for high performance VLSI embedded memories

    Science.gov (United States)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASIC's to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  5. Compilation of current high-energy-physics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.

    1980-04-01

    This is the third edition of a compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and ten participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about January 1980, and (2) had not completed taking of data by 1 January 1976.

  6. The necessity of doing needs analysis in textbook compilation

    Institute of Scientific and Technical Information of China (English)

    姚茂

    2014-01-01

    Needs analysis plays an important role in textbook compilation. Compiling an excellent textbook requires meeting many conditions, but the starting point of any textbook should be meeting the needs of its users. Needs analysis is therefore carried out to understand users' needs, so that the textbook better reflects relevance and practicality. Only when textbook writers fully understand the actual demands that users (students, teachers, and education department managers) place on a teaching textbook can they produce applicable materials.

  7. Atlantic Marine Mammal Assessment Vessel Surveys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data sets are a compilation of large vessel surveys for marine mammal stock assessments in South Atlantic (Florida to Maryland) waters from 1994 to the...

  8. 22 CFR 519.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-04-01

    ... Relations of the Senate and the Committee on Foreign Affairs of the House of Representatives or the... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Semi-annual compilation. 519.600 Section 519.600 Foreign Relations BROADCASTING BOARD OF GOVERNORS NEW RESTRICTIONS ON LOBBYING Agency Reports § 519.600...

  9. 15 CFR 28.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-01-01

    ... the Committee on Foreign Relations of the Senate and the Committee on Foreign Affairs of the House of... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Semi-annual compilation. 28.600 Section 28.600 Commerce and Foreign Trade Office of the Secretary of Commerce NEW RESTRICTIONS ON LOBBYING...

  10. 22 CFR 138.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-04-01

    ... Relations of the Senate and the Committee on Foreign Affairs of the House of Representatives or the... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Semi-annual compilation. 138.600 Section 138.600 Foreign Relations DEPARTMENT OF STATE MISCELLANEOUS NEW RESTRICTIONS ON LOBBYING Agency Reports...

  11. 22 CFR 712.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-04-01

    ... the Committee on Foreign Relations of the Senate and the Committee on Foreign Affairs of the House of... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Semi-annual compilation. 712.600 Section 712.600 Foreign Relations OVERSEAS PRIVATE INVESTMENT CORPORATION ADMINISTRATIVE PROVISIONS NEW RESTRICTIONS ON...

  12. 22 CFR 311.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-04-01

    ... Senate and the Committee on Foreign Affairs of the House of Representatives or the Committees on Armed... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual...

  13. 22 CFR 227.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-04-01

    ... Relations of the Senate and the Committee on Foreign Affairs of the House of Representatives or the... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports...

  14. 13 CFR 146.600 - Semi-annual compilation.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee on...

  15. Effective Compiler Error Message Enhancement for Novice Programming Students

    Science.gov (United States)

    Becker, Brett A.; Glanville, Graham; Iwashima, Ricardo; McDonnell, Claire; Goslin, Kyle; Mooney, Catherine

    2016-01-01

    Programming is an essential skill that many computing students are expected to master. However, programming can be difficult to learn. Successfully interpreting compiler error messages (CEMs) is crucial for correcting errors and progressing toward success in programming. Yet these messages are often difficult to understand and pose a barrier to…

  16. A Journey from Interpreters to Compilers and Virtual Machines

    DEFF Research Database (Denmark)

    Danvy, Olivier

    2003-01-01

    We review a simple sequence of steps to stage a programming-language interpreter into a compiler and virtual machine. We illustrate the applicability of this derivation with a number of existing virtual machines, mostly for functional languages. We then outline its relevance for today's language...

  17. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    Science.gov (United States)

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  18. Compilation of a global inventory of emissions of nitrous oxide.

    NARCIS (Netherlands)

    Bouwman, A.F.

    1995-01-01

    A global inventory with 1°x1° resolution was compiled of emissions of nitrous oxide (N 2 O) to the atmosphere, including emissions from soils under natural vegetation, fertilized agricultural land, grasslands and animal excreta, biomass burning, forest clearing, oceans, fossil fuel and bi

  19. Updated site compilation of the Latin American Pollen Database

    NARCIS (Netherlands)

    S.G.A. Flantua; H. Hooghiemstra; E.C. Grimm; H. Behling; M.B Bush; C. González-Arrango; W.D. Gosling; M.-P. Ledru; S. Lozano-Garciá; A. Maldonado; A.R. Prieto; V. Rull; J.H. van Boxel

    2015-01-01

    The updated inventory of the Latin American Pollen Database (LAPD) offers a wide range of new insights. This paper presents a systematic compilation of palynological research in Latin America. A comprehensive inventory of publications in peer-reviewed and grey literature shows a major expansion of s

  20. Compiler Optimization Techniques for OpenMP Programs

    Directory of Open Access Journals (Sweden)

    Shigehisa Satoh

    2001-01-01

    We have developed compiler optimization techniques for explicit parallel programs using the OpenMP API. To enable optimization across threads, we designed dataflow analysis techniques in which interactions between threads are effectively modeled. Structured description of parallelism and relaxed memory consistency in OpenMP make the analyses effective and efficient. We developed algorithms for reaching definitions analysis, memory synchronization analysis, and cross-loop data dependence analysis for parallel loops. Our primary target is compiler-directed software distributed shared memory systems in which aggressive compiler optimizations for software-implemented coherence schemes are crucial to obtaining good performance. We also developed optimizations applicable to general OpenMP implementations, namely redundant barrier removal and privatization of dynamically allocated objects. Experimental results for the coherency optimization show that aggressive compiler optimizations are quite effective for a shared-write intensive program because the coherence-induced communication volume in such a program is much larger than that in shared-read intensive programs.
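    A minimal sketch of a sequential reaching-definitions worklist analysis on a toy control-flow graph (block names and definitions are invented; the paper's contribution is extending this kind of dataflow analysis to model interactions between OpenMP threads, which this sketch does not attempt):

        cfg = {"B1": ["B2"], "B2": ["B3", "B4"], "B3": ["B2"], "B4": []}
        preds = {n: [m for m in cfg if n in cfg[m]] for n in cfg}
        gen = {"B1": {("x", "d1")}, "B2": set(), "B3": {("x", "d2")}, "B4": set()}
        kill = {"B1": {("x", "d2")}, "B2": set(), "B3": {("x", "d1")}, "B4": set()}

        IN = {n: set() for n in cfg}
        OUT = {n: set() for n in cfg}
        work = list(cfg)
        while work:
            n = work.pop(0)
            IN[n] = set().union(*[OUT[p] for p in preds[n]])
            new_out = gen[n] | (IN[n] - kill[n])
            if new_out != OUT[n]:                          # propagate until a fixed point
                OUT[n] = new_out
                work.extend(cfg[n])

        print({n: sorted(d for _, d in IN[n]) for n in cfg})
        # {'B1': [], 'B2': ['d1', 'd2'], 'B3': ['d1', 'd2'], 'B4': ['d1', 'd2']}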

  1. 5 CFR 9701.524 - Compilation and publication of data.

    Science.gov (United States)

    2010-01-01

    ... MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Labor-Management Relations § 9701.524 Compilation and... agreements and arbitration decisions and publish the texts of its impasse resolution decisions and...

  2. Calculating Certified Compilers for Non-deterministic Languages

    DEFF Research Database (Denmark)

    Bahr, Patrick

    2015-01-01

    Reasoning about programming languages with non-deterministic semantics entails many difficulties. For instance, to prove correctness of a compiler for such a language, one typically has to split the correctness property into a soundness and a completeness part, and then prove these two parts...

  3. Compilation of a global inventory of emissions of nitrous oxide

    NARCIS (Netherlands)

    Bouwman, A.F.

    1995-01-01

    A global inventory with 1°x1° resolution was compiled of emissions of nitrous oxide (N 2 O) to the atmosphere, including emissions from soils under natural vegetation, fertilized agricultural land, grasslands and animal excreta, biomass burning, forest clearing,

  4. Compilation of historical information of 300 Area facilities and activities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, M.S.

    1992-12-01

    This document is a compilation of historical information on 300 Area activities and facilities since their beginning. The 300 Area is shown as it looked in 1945, and a more recent (1985) view of the 300 Area is also provided.

  5. Experience with PASCAL compilers on mini-computers

    CERN Document Server

    Bates, D

    1977-01-01

    This paper relates the history of an implementation of the language PASCAL on a minicomputer. The unnecessary difficulties encountered on the way led the authors to reflect on the distribution of 'portable' compilers in general and to suggest some guidelines for the future. (4 refs).

  6. Final report: Compiled MPI. Cost-Effective Exascale Application Development

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, William Douglas [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2015-12-21

    This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and it summarizes the results of this project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of this project looked at optimizing data accesses expressed with MPI datatypes.

  7. Indexed compilation of experimental high energy physics literature. [Synopsis

    Energy Technology Data Exchange (ETDEWEB)

    Horne, C.P.; Yost, G.P.; Rittenberg, A.

    1978-09-01

    An indexed compilation of approximately 12,000 experimental high energy physics documents is presented. A synopsis of each document is given, and the documents are indexed according to beam/target/momentum, reaction/momentum, final-state-particle, particle/particle-property, accelerator/detector, and (for a limited set of the documents) experiment. No data are given.

  8. An Extensible Open-Source Compiler Infrastructure for Testing

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Ur, S; Vuduc, R

    2005-12-09

    Testing forms a critical part of the development process for large-scale software, and there is growing need for automated tools that can read, represent, analyze, and transform the application's source code to help carry out testing tasks. However, the support required to compile applications written in common general purpose languages is generally inaccessible to the testing research community. In this paper, we report on an extensible, open-source compiler infrastructure called ROSE, which is currently in development at Lawrence Livermore National Laboratory. ROSE specifically targets developers who wish to build source-based tools that implement customized analyses and optimizations for large-scale C, C++, and Fortran90 scientific computing applications (on the order of a million lines of code or more). However, much of this infrastructure can also be used to address problems in testing, and ROSE is by design broadly accessible to those without a formal compiler background. This paper details the interactions between testing of applications and the ways in which compiler technology can aid in the understanding of those applications. We emphasize the particular aspects of ROSE, such as support for the general analysis of whole programs, that are particularly well-suited to the testing research community and the scale of the problems that community solves.

  9. Natural Gas Monthly August 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-08-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. Explanatory notes supplement the information found in tables of the report. A description of the data collection surveys that support the NGM is provided. A glossary of the terms used in this report is also provided to assist readers in understanding the data presented in this publication.

  10. Natural gas monthly, October 1991

    Energy Technology Data Exchange (ETDEWEB)

    1991-11-05

    The Natural Gas Monthly (NGM) is prepared in the Data Operations Branch of the Reserves and Natural Gas Division, Office of Oil and Gas, Energy Information Administration (EIA), US Department of Energy (DOE). The NGM highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. The data in this publication are collected on surveys conducted by the EIA to fulfill its responsibilities for gathering and reporting energy data. Some of the data are collected under the authority of the Federal Energy Regulatory Commission (FERC), an independent commission within the DOE, which has jurisdiction primarily in the regulation of electric utilities and the interstate natural gas industry. Geographic coverage is the 50 States and the District of Columbia. 16 figs., 33 tabs.

  11. A Language for Specifying Compiler Optimizations for Generic Software

    Energy Technology Data Exchange (ETDEWEB)

    Willcock, Jeremiah J. [Indiana Univ., Bloomington, IN (United States)

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization — the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  12. Bias Corrected Spatially Downscaled Monthly CMIP5 Climate Projections

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This archive contains 234 projections of monthly BCSD CMIP5 projections of precipitation and monthly means of daily-average, daily maximum and daily minimum...

  13. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    Directory of Open Access Journals (Sweden)

    R. Li

    2015-11-01

    Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags provides essential technical support for Earth system modeling. With the fast development of computer software and hardware, a compiling setup has to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible by a newer compiling setup because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.
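    A minimal sketch of checking that two builds of the same model produce bitwise identical output (the file names are hypothetical placeholders for outputs produced by two different compiling setups):

        import hashlib

        def digest(path, chunk=1 << 20):
            """SHA-256 over the raw bytes of an output file."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        # Identical digests mean the two compiling setups give bitwise identical results.
        print(digest("run_setup_a.nc") == digest("run_setup_b.nc"))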

  14. Hispanic Heritage Month

    Science.gov (United States)

    York, Sherry

    2004-01-01

    Hispanic Heritage Month runs from September 15 to October 15. One problem that arises when grouping people into categories such as Hispanic or Latino is stereotyping; this month can be used to look beyond stereotypes and promote a greater understanding of Latino cultures.

  15. Progress report, 24 months

    DEFF Research Database (Denmark)

    Juhl, Thomas Winther; Nielsen, Jakob Skov

    The work performed during the past 12 months (months 13 – 24) of the project has included the conclusion of Task 1 – Fundamental Studies and Task 2 – Multimirror Cutting Head Design. Work on Task 3 – Compact Cutting Head Design, and Task 4 – Interface Design has been carried out and the tests...... of the multimirror cutting head have been started....

  16. Progress report, 36 months

    DEFF Research Database (Denmark)

    Juhl, Thomas Winther; Nielsen, Jakob Skov

    The work performed during the past 12 months (months 13 – 24) of the project has included the conclusion of Task 1 – Fundamental Studies and Task 2 – Multimirror Cutting Head Design. Work on Task 3 – Compact Cutting Head Design, and Task 4 – Interface Design has been carried out and the tests...... of the multimirror cutting head have been started....

  17. Progress report, 36 months

    DEFF Research Database (Denmark)

    Juhl, Thomas Winther; Nielsen, Jakob Skov

    The work performed during the past 12 months (months 13 – 24) of the project has included the conclusion of Task 1 – Fundamental Studies and Task 2 – Multimirror Cutting Head Design. Work on Task 3 – Compact Cutting Head Design, and Task 4 – Interface Design has been carried out and the tests of ...

  18. Progress report, 24 months

    DEFF Research Database (Denmark)

    Juhl, Thomas Winther; Nielsen, Jakob Skov

    The work performed during the past 12 months (months 13 – 24) of the project has included the conclusion of Task 1 – Fundamental Studies and Task 2 – Multimirror Cutting Head Design. Work on Task 3 – Compact Cutting Head Design, and Task 4 – Interface Design has been carried out and the tests of ...

  19. Survey and research on feeding behavior of infants and toddlers aged between 2 months and 36 months in Shanghai (上海地区2~36个月婴幼儿进食行为调查研究)

    Institute of Scientific and Technical Information of China (English)

    徐琼; 徐秀; 刘静; 鲁萍; 燕东雍

    2011-01-01

    [Objective] To obtain a systematic insight into the feeding behavior of infants and toddlers in Shanghai. Main indicators included feeding behaviors, feeding activities by feeders, and the development of feeding skills of infants and toddlers. [Methods] The research adopted a cluster sampling method to conduct a questionnaire survey of 960 healthy infants and toddlers aged between 2 months and 36 months in several neighborhoods in six districts. The study recovered 873 completely filled-in, valid questionnaires and used the SPSS 11.5 software for analysis. [Results] 42.7% of responding parents believed that their children had feeding problems. The main feeding problems of infants included eating too little (38.0%), picky and finicky eating (21.4%), and nausea and vomiting (19.8%). The main feeding problems of older children included picky and finicky eating (39.5%), eating too little (34.3%) and eating too slowly (32.8%). Other common problems included overly long mealtimes, a high frequency of meals, inappropriate feeding positions and feeders' poor feeding techniques. The research also found that infants and toddlers of all ages lagged behind the textbook norms in acquiring feeding skills. In skills such as finger feeding, self-feeding with help, drinking from a cup with help and drinking from a cup without assistance, the infants and toddlers in this study clearly lagged behind the textbook norms, and the differences were statistically significant (P<0.05). [Conclusion] Changing unreasonable meal frequencies, meal durations and feeding behaviors, while properly promoting the development of feeding skills, will help reduce the occurrence of feeding problems and support the normal development of feeding behavior in infants and toddlers.

  20. Petroleum supply monthly, March 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-30

    Data presented in the Petroleum Supply Monthly (PSM) describe the supply and disposition of petroleum products in the United States and major US geographic regions. The data series describe production, imports and exports, inter-Petroleum Administration for Defense (PAD) District movements, and inventories by the primary suppliers of petroleum products in the United States (50 States and the District of Columbia). The reporting universe includes those petroleum sectors in primary supply. Included are: petroleum refiners, motor gasoline blenders, operators of natural gas processing plants and fractionators, inter-PAD transporters, importers, and major inventory holders of petroleum products and crude oil. When aggregated, the data reported by these sectors approximately represent the consumption of petroleum products in the United States. Data presented in the PSM are divided into two sections: Summary Statistics and Detailed Statistics. The tables and figures in the Summary Statistics section of the PSM present a time series of selected petroleum data on a US level. Most time series include preliminary estimates for one month based on the Weekly Petroleum Supply Reporting System; statistics based on the most recent data from the Monthly Petroleum Supply Reporting System (MPSRS); and statistics published in prior issues of the PSM and PSA. The Detailed Statistics tables of the PSM present statistics for the most current month available as well as year-to-date. In most cases, the statistics are presented for several geographic areas -- the United States (50 States and the District of Columbia), five PAD Districts, and 12 Refining Districts. At the US and PAD District level, the total volume and the daily rate of activities are presented. The statistics are developed from monthly survey forms submitted by respondents to the EIA and from data provided from other sources.

  1. Petroleum supply monthly, June 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-06-28

    Data presented in the Petroleum Supply Monthly (PSM) describe the supply and disposition of petroleum products in the United States and major US geographic regions. The data series describe production, imports and exports, inter-Petroleum Administration for Defense (PAD) District movements, and inventories by the primary suppliers of petroleum products in the United States (50 States and the District of Columbia). The reporting universe includes those petroleum sectors in primary supply. Included are: petroleum refiners, motor gasoline blenders, operators of natural gas processing plants and fractionators, inter-PAD transporters, importers, and major inventory holders of petroleum products and crude oil. When aggregated, the data reported by these sectors approximately represent the consumption of petroleum products in the United States. Data presented in the PSM are divided into two sections: Summary Statistics and Detailed Statistics. The tables and figures in the Summary Statistics section of the PSM present a time series of selected petroleum data on a US level. Most time series include preliminary estimates for one month based on the Weekly Petroleum Supply Reporting System; statistics based on the most recent data from the Monthly Petroleum Supply Reporting System (MPSRS); and statistics published in prior issues of the PSM and PSA. The Detailed Statistics tables of the PSM present statistics for the most current month available as well as year-to-date. In most cases, the statistics are presented for several geographic areas -- the United States (50 States and the District of Columbia), five PAD Districts, and 12 Refining Districts. At the US and PAD District level, the total volume and the daily rate of activities are presented. The statistics are developed from monthly survey forms submitted by respondents to the EIA and from data provided from other sources.

  2. Electric power monthly, July 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    The Electric Power Monthly (EPM) presents monthly electricity statistics. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data in this report are presented for a wide audience including Congress, Federal and State agencies, the electric utility industry, and the general public. The EIA collected the information in this report to fulfill its data collection and dissemination responsibilities as specified in the Federal Energy Administration Act of 1974 (Public Law 93-275) as amended. The EPM is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), Department of Energy. This publication provides monthly statistics at the US, Census division, and State levels for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions. Statistics by company and plant are published in the EPM on the capability of new generating units, net generation, fuel consumption, fuel stocks, quantity and quality of fuel, and cost of fossil fuels. Data on quantity, quality, and cost of fossil fuels lag data on net generation, fuel consumption, fuel stocks, electricity sales, and average revenue per kilowatthour by 1 month. This difference in reporting appears in the US, Census division, and State level tables. However, for purposes of comparison, plant-level data are presented for the earlier month.

  3. Database compilation for the geologic map of the San Francisco volcanic field, north-central Arizona

    Science.gov (United States)

    Bard, Joseph A.; Ramsey, David W.; Wolfe, Edward W.; Ulrich, George E.; Newhall, Christopher G.; Moore, Richard B.; Bailey, Norman G.; Holm, Richard F.

    2016-01-08

    The main component of this publication is a geologic map database prepared using geographic information system (GIS) applications. The geodatabase of geologic points, lines, and polygons was produced as a compilation from five adjoining map sections originally published as printed maps in 1987 (see references in metadata). Four of the sections (U.S. Geological Survey Miscellaneous Field Studies Maps MF–1957, MF–1958, MF–1959, MF–1960) were created by scanning and geo-referencing stable base map material consisting of mylar positives. The final section (MF–1956) was compiled by hand tracing an enlargement of the available printed paper base map onto mylar using a #00 rapidograph pen, the mylar positive was then digitally scanned and geo-referenced. This method was chosen because the original basemap materials (mylar positives) for the MF–1956 section were unavailable at the time of this publication. Due to the condition of the available MF–1956 map section used as the base (which had previously been folded) the accuracy within the boundary of the MF–1956 section is presumed to be degraded in certain areas. The locations of the degraded areas and the degree of degradation within these areas is unclear. Final compilation of the database was completed using the ArcScan toolset, and the Editor toolset in ESRI ArcMap 10.1. Polygon topology was created from the lines and labels were added to the resultant geological polygons, lines, and points. Joseph A. Bard and David W. Ramsey updated and corrected the geodatabase, created the metadata and web presence, and provided the GIS-expertise to bring the geodatabase and metadata to completion. Included are links to files to view or print the original map sheets and the accompanying pamphlets.

  4. Function Interface Models for Hardware Compilation: Types, Signatures, Protocols

    CERN Document Server

    Ghica, Dan R

    2009-01-01

    The problem of synthesis of gate-level descriptions of digital circuits from behavioural specifications written in higher-level programming languages (hardware compilation) has been studied for a long time, yet a definitive solution has not been forthcoming. The argument of this essay is mainly methodological, bringing a perspective that is informed by recent developments in programming-language theory. We argue that one of the major obstacles in the way of hardware compilation becoming a useful and mature technology is the lack of a well-defined function interface model, i.e. a canonical way in which functions communicate with arguments. We discuss the consequences of this problem and propose a solution based on new developments in programming language theory. We conclude by presenting a prototype implementation and some examples illustrating our principles.

  5. Compiler analysis for irregular problems in FORTRAN D

    Science.gov (United States)

    Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel

    1992-01-01

    We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.
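
    As a rough illustration of the runtime preprocessing this analysis targets, the sketch below (in Python rather than FORTRAN D, with invented helper names) shows the inspector/executor idea: an inspector derives a communication schedule from the indirection array once, and the gathered off-processor copies are reused across loops whenever the compiler can prove that the indirection array and the data distribution have not changed in between.

        # Illustrative sketch only; these helpers are not part of any FORTRAN D runtime.
        def inspect(indirection, owned_lo, owned_hi):
            """Inspector: collect the off-processor indices this processor will need."""
            return sorted({j for j in indirection if not (owned_lo <= j < owned_hi)})

        def gather(schedule, fetch_remote):
            """Executor prologue: fetch off-processor values once per schedule."""
            return {j: fetch_remote(j) for j in schedule}

        indirection = [2, 7, 9, 3, 7]                  # e.g. edge list of an irregular mesh
        schedule = inspect(indirection, owned_lo=0, owned_hi=5)
        remote = gather(schedule, fetch_remote=float)  # stand-in for real communication

        x = [1.0] * 5                                  # locally owned data
        for step in range(2):                          # both loops reuse the same schedule
            for k, j in enumerate(indirection):
                val = x[j] if 0 <= j < 5 else remote[j]
                x[k % 5] += val
        print(schedule, x)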

  6. Efficient Integration of Pipelined IP Blocks into Automatically Compiled Datapaths

    Directory of Open Access Journals (Sweden)

    Andreas Koch

    2006-12-01

    Full Text Available Compilers for reconfigurable computers aim to generate problem-specific optimized datapaths for kernels extracted from an input language. In many cases, however, judicious use of preexisting manually optimized IP blocks within these datapaths could improve the compute performance even further. The integration of IP blocks into the compiled datapaths poses a different set of problems, though, than stitching together IPs to form a system-on-chip (SoC): instead of the loose coupling using standard busses employed by SoCs, the coupling between datapath and IP block must be much tighter. To this end, we propose a concise language that can be efficiently synthesized using a template-based approach for automatically generating lightweight data and control interfaces at the datapath level.

  7. Efficient Integration of Pipelined IP Blocks into Automatically Compiled Datapaths

    Directory of Open Access Journals (Sweden)

    Koch Andreas

    2007-01-01

    Full Text Available Compilers for reconfigurable computers aim to generate problem-specific optimized datapaths for kernels extracted from an input language. In many cases, however, judicious use of preexisting manually optimized IP blocks within these datapaths could improve the compute performance even further. The integration of IP blocks into the compiled datapaths poses a different set of problems, though, than stitching together IPs to form a system-on-chip (SoC): instead of the loose coupling using standard busses employed by SoCs, the coupling between datapath and IP block must be much tighter. To this end, we propose a concise language that can be efficiently synthesized using a template-based approach for automatically generating lightweight data and control interfaces at the datapath level.

  8. Twelve tips on how to compile a medical educator's portfolio.

    Science.gov (United States)

    Dalton, Claudia Lucy; Wilson, Anthony; Agius, Steven

    2017-09-17

    Medical education is an expanding area of specialist interest for medical professionals. Whilst most doctors will be familiar with the compilation of clinical portfolios for scrutiny of their clinical practice and provision of public accountability, teaching portfolios used specifically to gather and demonstrate medical education activity remain uncommon in many non-academic settings. For aspiring and early career medical educators in particular, their value should not be underestimated. Such a medical educator's portfolio (MEP) is a unique compendium of evidence that is invaluable for appraisal, revalidation, and promotion. It can stimulate and provide direction for professional development, and is a rich source for personal reflection and learning. We recommend that all new and aspiring medical educators prepare an MEP, and suggest twelve tips on how to skillfully compile one.

  9. Compilation of Non-Financial Balances in the Czech Republic

    Directory of Open Access Journals (Sweden)

    Vítězslav Ondruš

    2011-09-01

    Full Text Available The System of National Accounts in the Czech Republic consists of three main parts — institutional sector accounts, input-output tables and balances of non-financial assets. All three parts are compiled interactively according to a common time schedule. The article deals with balances of non-financial assets and their relation to core institutional sector accounts, explains why the third parallel part of the SNA in the Czech Republic was built, and describes its weaknesses and future development.

  10. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    Science.gov (United States)

    2015-04-10

    Workshop on Libraries and Automatic Tuning for Extreme Scale Systems, Lake Tahoe, CA, August 2011; J. Ramanujam, “The Tensor Contraction Engine...”; ...Hartono, M. Baskaran, L.-N. Pouchet, J. Ramanujam, and P. Sadayappan, “Parametric Tiling of Affine Loop Nests,” in 15th Workshop on Compilers for Parallel...; “...Parametric Tiling for Autotuning,” in Workshop on Parallel Matrix Algorithms and Applications (PMAA 2010), Basel, Switzerland, July 2010; J. Ramanujam...

  11. Analysis on Establishing Urban Cemetery Planning and Compiling System

    Institute of Scientific and Technical Information of China (English)

    Kun; YANG; Xiaogang; CHEN

    2015-01-01

    Currently, there are many problems in the construction of urban cemeteries, such as improper location, low land utilization, backward greening facilities and imperfect cemetery management, which have greatly affected people’s normal production and life. This article discusses the establishment of a sustainable city cemetery planning and compiling system at the three levels of "macro-view, medium-view and micro-view" in order to perfect the present cemetery system.

  12. An Adaptation of the ADA Language for Machine Generated Compilers.

    Science.gov (United States)

    1980-12-01

    Ada Augusta, Lady Lovelace, the daughter of the poet, Lord Byron, and Charles Babbage's programmer. UNIX is a Trademark/Service Mark of the Bell... AN ADAPTATION OF THE ADA LANGUAGE FOR MACHINE GENERATED COMPILERS. M. A. Rogers, L. P. Myers. Thesis, Naval Postgraduate School, Monterey, California, December 1980.

  13. Recent Efforts in Data Compilations for Nuclear Astrophysics

    CERN Document Server

    Dillmann, I

    2008-01-01

    Some recent efforts in compiling data for astrophysical purposes are introduced, which were discussed during a JINA-CARINA Collaboration meeting on "Nuclear Physics Data Compilation for Nucleosynthesis Modeling" held at the ECT* in Trento, Italy, from May 29th to June 3rd, 2007. The main goal of this collaboration is to develop an updated and unified nuclear reaction database for modeling a wide variety of stellar nucleosynthesis scenarios. Presently a large number of different reaction libraries (REACLIB) are used by the astrophysics community. The "JINA Reaclib Database" on http://www.nscl.msu.edu/~nero/db/ aims to merge and fit the latest experimental stellar cross sections and reaction rate data of various compilations, e.g. NACRE and its extension for Big Bang nucleosynthesis, Caughlan and Fowler, Iliadis et al., and KADoNiS. The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars, http://nuclear-astrophysics.fzk.de/kadonis) project is an online database for neutron capture cross sections...

  14. Construction experiences from underground works at Forsmark. Compilation Report

    Energy Technology Data Exchange (ETDEWEB)

    Carlsson, Anders [Vattenfall Power Consultant AB, Stockholm (Sweden); Christiansson, Rolf [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)

    2007-02-15

    The main objective with this report, the Construction Experience Compilation Report (CECR), is to compile experiences from the underground works carried out at Forsmark, primarily construction experiences from the tunnelling of the two cooling water tunnels of the Forsmark nuclear power units 1, 2 and 3, and from the underground excavations of the undersea repository for low and intermediate reactor waste, SFR. In addition, a brief account is given of the operational experience of the SFR on primarily rock support solutions. The authors of this report have separately participated throughout the entire construction periods of the Forsmark units and the SFR in the capacity of engineering geologists performing geotechnical mapping of the underground excavations and acted as advisors on tunnel support; Anders Carlsson participated in the construction works of the cooling water tunnels and the open cut excavations for Forsmark 1, 2 and 3 (geotechnical mapping) and the Forsmark 3 tunnel (advice on tunnel support). Rolf Christiansson participated in the underground works for the SFR (geotechnical mapping, principal investigator for various measurements, and advice on tunnel support and grouting). The report is to a great extent based on earlier published material as presented in the list of references. But it stands to reason that, during the course of the work with this report, unpublished notes, diaries, drawings, photos and personal recollections of the two authors have been utilised in order to obtain as complete a compilation of the construction experiences as possible.

  15. Fault-tolerant digital microfluidic biochips compilation and synthesis

    CERN Document Server

    Pop, Paul; Stuart, Elena; Madsen, Jan

    2016-01-01

    This book describes for researchers in the fields of compiler technology, design and test, and electronic design automation the new area of digital microfluidic biochips (DMBs), and thus offers a new application area for their methods. The authors present a routing-based model of operation execution, along with several associated compilation approaches, which progressively relax the assumption that operations execute inside fixed rectangular modules. Since operations can experience transient faults during the execution of a bioassay, the authors show how to use both offline (design time) and online (runtime) recovery strategies. The book also presents methods for the synthesis of fault-tolerant application-specific DMB architectures. · Presents the current models used for the research on compilation and synthesis techniques of DMBs in a tutorial fashion; · Includes a set of “benchmarks”, which are presented in great detail and includes the source code of most of the t...

  16. TARP Monthly Housing Scorecard

    Data.gov (United States)

    Department of the Treasury — Treasury and the U.S. Department of Housing and Urban Development (HUD) jointly produce a Monthly Housing Scorecard on the health of the nation’s housing market. The...

  17. Lightship Monthly Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Daily Weather Observations (Monthly Form 1001) from lightship stations in the United States. Please see the 'Surface Weather Observations (1001)' library for more...

  18. Oceanographic Monthly Summary

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Oceanographic Monthly Summary contains sea surface temperature (SST) analyses on both regional and ocean basin scales for the Atlantic, Pacific, and Indian Oceans....

  19. Individual external dose monitoring of all citizens of Date City by passive dosimeter 5 to 51 months after the Fukushima NPP accident (series): 1. Comparison of individual dose with ambient dose rate monitored by aircraft surveys.

    Science.gov (United States)

    Miyazaki, Makoto; Hayano, Ryugo

    2016-12-06

    Date (da'te) City in Fukushima Prefecture has conducted a population-wide individual dose monitoring program after the Fukushima Daiichi Nuclear Power Plant Accident, which provides a unique and comprehensive data set of the individual doses of citizens. The purpose of this paper, the first in the series, is to establish a method for estimating effective doses based on the available ambient dose rate survey data. We thus examined the relationship between the individual external doses and the corresponding ambient doses assessed from airborne surveys. The results show that the individual doses were about 0.15 times the ambient doses, the coefficient of 0.15 being a factor of 4 smaller than the value employed by the Japanese government, throughout the period of the airborne surveys used. The method obtained in this study could aid in the prediction of individual doses in the early phase of future radiological accidents involving large-scale contamination.

  1. A ‘Social Form Of Knowledge’ in Practice: Unofficial Compiling of 1960s Pop Music on CD-R

    Directory of Open Access Journals (Sweden)

    Paul Martin

    2012-01-01

    Full Text Available In this article I explore the ‘unofficial’ (and technically illegal) compiling of marginally known 1960s pop records on Compact Disc Recordable (CD-R). I do so by situating it within the proposition by the late Raphael Samuel, that history is ‘social knowledge’ and a practice rather than a profession. I propose that this compiling activity exemplifies this proposition. The core of the paper is centred on a 2007 survey which I conducted via three on-line 1960s music enthusiast discussion forums. I draw on the sixteen responses to demonstrate how the motivations, values and intentions of those respondents engaging in the practice of CD-R compiling are historically and socially centred. In doing so, I seek to problematise the music industry’s undifferentiated condemnation of all copying as theft. I do so by showing how, far from stealing, these CD-R compilers are adding to the musical social knowledge of 1960s pop and rock music. I further situate them within a longer lineage of ‘unofficial listening’ dating back to at least the 1930s. In using the term ‘unofficial’ in both a legal and public historical sense (e.g. to take issue with a received narrative), I point to wider definitions of what historically has or has not been musically ‘official’ to listen to. I seek also to point to the practice of CD-R compiling as a historical ‘moment’ in technological change, which might otherwise go unremarked upon as the CD-R itself heads towards utilitarian obsolescence. Although the issues and concepts raised in the paper can be little more than pointed to, it is hoped it might act as one platform for the historical engagement with a subject more commonly discussed in sociological terms. As public historians we should be reflexive and inter-disciplinary and it is with this mind set that this article is written.

  2. Petroleum marketing monthly with data for April 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-07-01

    This publication provides information and statistical data on a variety of crude oil costs and refined petroleum products sales. Data on crude oil include the domestic first purchase price, the free on board price and landed cost of imported crude oil, and the refiners’ acquisition cost of crude oil. Refined petroleum product sales data include motor gasoline, distillates, residuals, aviation fuels, kerosene, and propane. The data provided are compiled from six Energy Information Administration survey forms. 50 tabs.

  3. Compilation of Water-Resources Data for Montana, Water Year 2006

    Science.gov (United States)

    Ladd, P. B.; Berkas, W.R.; White, M.K.; Dodge, K.A.; Bailey, F.A.

    2007-01-01

    The U.S. Geological Survey, Montana Water Science Center, in cooperation with other Federal, State, and local agencies, and Tribal governments, collects a large amount of data pertaining to the water resources of Montana each water year. This report is a compilation of Montana site-data sheets for the 2006 water year, which consists of records of stage and discharge of streams; water quality of streams and ground water; stage and contents of lakes and reservoirs; water levels in wells; and precipitation data. Site-data sheets for selected stations in Canada and Wyoming also are included in this report. The data for Montana, along with data from various parts of the Nation, are included in 'Water-Resources Data for the United States, Water Year 2006', which is published as U.S. Geological Survey Water-Data Report WDR-US-2006 and is available at http://pubs.water.usgs.gov/wdr2006. Additional water year 2006 data were collected at crest-stage gage and miscellaneous-measurement stations but were not published. These data are stored in files of the U.S. Geological Survey Montana Water Science Center in Helena, Montana, and are available on request.

  4. Combining Compile-Time and Run-Time Parallelization

    Directory of Open Access Journals (Sweden)

    Sungdo Moon

    1999-01-01

    Full Text Available This paper demonstrates that significant improvements to automatic parallelization technology require that existing systems be extended in two ways: (1) they must combine high‐quality compile‐time analysis with low‐cost run‐time testing; and (2) they must take control flow into account during analysis. We support this claim with the results of an experiment that measures the safety of parallelization at run time for loops left unparallelized by the Stanford SUIF compiler’s automatic parallelization system. We present results of measurements on programs from two benchmark suites – SPECFP95 and NAS sample benchmarks – which identify inherently parallel loops in these programs that are missed by the compiler. We characterize remaining parallelization opportunities, and find that most of the loops require run‐time testing, analysis of control flow, or some combination of the two. We present a new compile‐time analysis technique that can be used to parallelize most of these remaining loops. This technique is designed to not only improve the results of compile‐time parallelization, but also to produce low‐cost, directed run‐time tests that allow the system to defer binding of parallelization until run‐time when safety cannot be proven statically. We call this approach predicated array data‐flow analysis. We augment array data‐flow analysis, which the compiler uses to identify independent and privatizable arrays, by associating predicates with array data‐flow values. Predicated array data‐flow analysis allows the compiler to derive “optimistic” data‐flow values guarded by predicates; these predicates can be used to derive a run‐time test guaranteeing the safety of parallelization.
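
    The run-time component of this approach can be pictured as a predicate that chooses between a parallel and a sequential version of the same loop. The sketch below is only an illustration in Python (the paper works inside the Stanford SUIF compiler and emits the test automatically; here the predicate and both loop versions are written by hand):

        from concurrent.futures import ThreadPoolExecutor

        def loop_body(a, b, offset, i):
            a[i + offset] = 2.0 * b[i]

        def run_loop(a, b, offset, n):
            # Predicate guarding parallelization: iterations write distinct elements
            # of `a` and only read `b`, provided the arrays do not alias and the
            # written range stays in bounds.
            safe = a is not b and offset >= 0 and offset + n <= len(a) and n <= len(b)
            if safe:
                with ThreadPoolExecutor() as pool:         # parallel version
                    list(pool.map(lambda i: loop_body(a, b, offset, i), range(n)))
            else:
                for i in range(n):                         # sequential fallback
                    loop_body(a, b, offset, i)
            return a

        print(run_loop([0.0] * 8, [1.0, 2.0, 3.0], offset=4, n=3))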

  5. Benchmarking Domain-Specific Compiler Optimizations for Variational Forms

    CERN Document Server

    Kirby, Robert C

    2012-01-01

    We examine the effect of using complexity-reducing relations to generate optimized code for the evaluation of finite element variational forms. The optimizations are implemented in a prototype code named FErari, which has been integrated as an optimizing backend to the FEniCS Form Compiler, FFC. In some cases, FErari provides very little speedup, while in other cases, we obtain reduced local operation counts by a factor of as much as 7.9 and speedups for the assembly of the global sparse matrix of as much as a factor of 2.8.

  6. Computer programs: Information retrieval and data analysis, a compilation

    Science.gov (United States)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  7. A Pad Router for the Monterey Silicon Compiler

    Science.gov (United States)

    1988-03-01

    program and the final chip layout. In 1986, M. A. Malagon-Fajar [Ref. 6] completed a valuable study on the relationship between the compiler and its layout...the first SCMOS cells, and E. Malagon [Ref. 9] described the structure of the data-path and inserted the first SCMOS organelles. That same...organelles are stacked vertically to form a unit. A description of the MacPitts data-path design and routing organization is presented by E. Malagon [Ref

  8. Impeccable classic dictionary expects to be compiled with discretion (严谨莫如辞书 千虑不可一失)

    Institute of Scientific and Technical Information of China (English)

    谭建农

    2001-01-01

    This paper presents a brief survey of the status quo of domestically published bilingual English-Chinese dictionaries. It consists of three parts. Part one is concerned with the author's concept of an impeccable classic dictionary. The second part specifies the mistakes in some of the bilingual dictionaries and analyses them. In the third part the author stresses the viewpoint that the first important thing for dictionary compilers to do is to comprehend fully what a dictionary is. Only when compilers have a penetrating comprehension of it can they devote painstaking effort to dictionary-editing work, and can impeccable classic dictionaries be produced.

  9. Monthly energy review

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-01

    This document presents an overview of the Energy Information Administration’s (EIA) recent monthly energy statistics. The statistics cover the major activities of U.S. production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors.

  10. Photos of the month

    CERN Multimedia

    Claudia Marcelloni de Oliveira

    Congratulations to Adele Rimoldi, ATLAS physicist from Pavia, who ran her first marathon in New York last month. Adele completed the 42.2 km in a time of 4:49:19. She sure makes it look easy!!! The ATLAS pixel service quarter panel in SR1

  11. Monthly Energy Review

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-05-28

    This publication presents an overview of the Energy Information Administration’s recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. Two brief “energy plugs” (reviews of EIA publications) are included, as well.

  12. Individual External Dose Monitoring of All Citizens of Date City by Passive Dosimeter 5 to 52 Months After the Fukushima NPP Accident (series): 1. Comparison of Individual Dose with Ambient Dose Rate Monitored by Aircraft Surveys

    CERN Document Server

    Hayano, Ryugo

    2016-01-01

    Date (d\\textschwa 'te) City in Fukushima Prefecture has conducted a population-wide individual dose monitoring program after the Fukushima Daiichi Nuclear Power Plant Accident, which provides a unique and comprehensive data set of the individual doses of citizens. The relationship between the individual doses and the corresponding ambient doses assessed from airborne surveys was examined. The results show that the individual doses were about 0.15 times the ambient doses, which were a quarter of the value employed by the Japanese government, throughout the period of the airborne surveys used. The knowledge obtained in this study could enable the prediction of individual doses in the early phase of future radiological accidents involving large-scale contamination.

  13. An advanced compiler designed for a VLIW DSP for sensors-based systems.

    Science.gov (United States)

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors.

  14. An Advanced Compiler Designed for a VLIW DSP for Sensors-Based Systems

    Directory of Open Access Journals (Sweden)

    Hu He

    2012-04-01

    Full Text Available The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors.

  15. Public Seagrass Compilation for West Coast Essential Fish Habitat (EFH) Environmental Impact Statement

    Data.gov (United States)

    Pacific States Marine Fisheries Commission — These data are a compilation of currently available seagrass GIS data sets for the west coast of the United States. These data have been compiled from seventeen...

  16. Month of Birth and Children's Health in India

    Science.gov (United States)

    Lokshin, Michael; Radyakin, Sergiy

    2012-01-01

    We use data from three waves of India National Family Health Survey to explore the relationship between the month of birth and the health outcomes of young children in India. We find that children born during the monsoon months have lower anthropometric scores compared to children born during the fall-winter months. We propose and test hypotheses…

  17. Your Child's Development: 9 Months

    Science.gov (United States)

  18. National Energy Strategy: A compilation of public comments; Interim Report

    Energy Technology Data Exchange (ETDEWEB)

    1990-04-01

    This Report presents a compilation of what the American people themselves had to say about problems, prospects, and preferences in energy. The Report draws on the National Energy Strategy public hearing record and accompanying documents. In all, 379 witnesses appeared at the hearings to exchange views with the Secretary, Deputy Secretary, and Deputy Under Secretary of Energy, and Cabinet officers of other Federal agencies. Written submissions came from more than 1,000 individuals and organizations. Transcripts of the oral testimony and question-and-answer (Q-and-A) sessions, as well as prepared statements submitted for the record and all other written submissions, form the basis for this compilation. Citations of these sources in this document use a system of identifying symbols explained below and in the accompanying box. The Report is organized into four general subject areas concerning: (1) efficiency in energy use, (2) the various forms of energy supply, (3) energy and the environment, and (4) the underlying foundations of science, education, and technology transfer. Each of these, in turn, is subdivided into sections addressing specific topics --- such as (in the case of energy efficiency) energy use in the transportation, residential, commercial, and industrial sectors, respectively. 416 refs., 44 figs., 5 tabs.

  19. An empirical study of FORTRAN programs for parallelizing compilers

    Science.gov (United States)

    Shen, Zhiyu; Li, Zhiyuan; Yew, Pen-Chung

    1990-01-01

    Some results are reported from an empirical study of program characteristics that are important to parallelizing compiler writers, especially in the area of data dependence analysis and program transformations. The state of the art in data dependence analysis and some parallel execution techniques are examined. The major findings are included. Many subscripts contain symbolic terms with unknown values. A few methods of determining their values at compile time are evaluated. Array references with coupled subscripts appear quite frequently; these subscripts must be handled simultaneously in a dependence test, rather than being handled separately as in current test algorithms. Nonzero coefficients of loop indexes in most subscripts are found to be simple: they are either 1 or -1. This allows an exact real-valued test to be as accurate as an exact integer-valued test for one-dimensional or two-dimensional arrays. Dependencies with uncertain distance are found to be rather common, and one of the main reasons is the frequent appearance of symbolic terms with unknown values.
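
    The observation that most coefficients are 1 or -1 is what makes very simple exact tests effective in practice. As a small illustration (in Python, not taken from the study itself), the classical GCD test declares that accesses A[a*i + b] and A[c*j + d] can conflict only if gcd(a, c) divides d - b:

        from math import gcd

        def gcd_test(a, b, c, d):
            """Return False when A[a*i + b] and A[c*j + d] are provably independent;
            True means a dependence may exist and further testing is needed."""
            g = gcd(a, c)
            if g == 0:                      # both coefficients zero: compare constants
                return b == d
            return (d - b) % g == 0

        print(gcd_test(2, 0, 2, 1))   # A[2*i] vs A[2*j + 1]: False, no dependence
        print(gcd_test(1, 0, 1, 3))   # A[i]   vs A[j + 3]:   True, may depend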

  20. Proving Correctness for Pointer Programs in a Verifying Compiler

    Science.gov (United States)

    Kulczycki, Gregory; Singh, Amrinder

    2008-01-01

    This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden to the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove-with as little human intervention as possible-that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission critical systems.

  1. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), the Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  2. Interpretation, compilation and field verification procedures in the CARETS project

    Science.gov (United States)

    Alexander, Robert H.; De Forth, Peter W.; Fitzpatrick, Katherine A.; Lins, Harry F.; McGinty, Herbert K.

    1975-01-01

    The production of the CARETS map data base involved the development of a series of procedures for interpreting, compiling, and verifying data obtained from remote sensor sources. Level II land use mapping from high-altitude aircraft photography at a scale of 1:100,000 required production of a photomosaic mapping base for each of the 48, 50 x 50 km sheets, and the interpretation and coding of land use polygons on drafting film overlays. CARETS researchers also produced a series of 1970 to 1972 land use change overlays, using the 1970 land use maps and 1972 high-altitude aircraft photography. To enhance the value of the land use sheets, researchers compiled series of overlays showing cultural features, county boundaries and census tracts, surface geology, and drainage basins. In producing Level I land use maps from Landsat imagery, at a scale of 1:250,000, interpreters overlaid drafting film directly on Landsat color composite transparencies and interpreted on the film. They found that such interpretation involves pattern and spectral signature recognition. In studies using Landsat imagery, interpreters identified numerous areas of change but also identified extensive areas of "false change," where Landsat spectral signatures but not land use had changed.

  3. Memory management and compiler support for rapid recovery from failures in computer systems

    Science.gov (United States)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  4. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Science.gov (United States)

    2010-01-01

    ... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of records... contains investigatory material compiled for law enforcement purposes. (2) Provisions of the Privacy Act of...

  5. 49 CFR 801.57 - Records compiled for law enforcement purposes.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Records compiled for law enforcement purposes. 801... compiled for law enforcement purposes. Pursuant to 5 U.S.C. 552(b)(7), any records compiled for law or regulatory enforcement are exempt from public disclosure to the extent that disclosure would interfere with...

  6. Medical Surveillance Monthly Report

    Science.gov (United States)

    2016-07-01

    likelihood to be able to perform unrestricted duty. Author affiliations: Preventive Medicine Residency, Uniformed Services University of the Health... symptoms, health care visits, and absenteeism among Iraq War veterans. Am J Psychiatry. 2007;164(1):150–153. 20. Stein M, McAllister TW. Exploring the... demonstrated the increasing use and acceptance of these approaches in the general and military populations.6–8 For example, results of a 2012 survey

  7. Petroleum marketing monthly

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    Petroleum Marketing Monthly (PMM) provides information and statistical data on a variety of crude oils and refined petroleum products. The publication presents statistics on crude oil costs and refined petroleum products sales for use by industry, government, private sector analysts, educational institutions, and consumers. Data on crude oil include the domestic first purchase price, the f.o.b. and landed cost of imported crude oil, and the refiners’ acquisition cost of crude oil. Refined petroleum product sales data include motor gasoline, distillates, residuals, aviation fuels, kerosene, and propane. The Petroleum Marketing Division, Office of Oil and Gas, Energy Information Administration ensures the accuracy, quality, and confidentiality of the published data in the Petroleum Marketing Monthly.

  8. Petroleum marketing monthly

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-01

    The Petroleum Marketing Monthly (PMM) provides information and statistical data on a variety of crude oils and refined petroleum products. The publication presents statistics on crude oil costs and refined petroleum products sales for use by industry, government, private sector analysts, educational institutions, and consumers. Data on crude oil include the domestic first purchase price, the f.o.b. and landed cost of imported crude oil, and the refiners acquisition cost of crude oil. Refined petroleum product sales data include motor gasoline, distillates, residuals, aviation fuels, kerosene, and propane. The Petroleum Marketing Division, Office of Oil and Gas, Energy Information Administration ensures the accuracy, quality, and confidentiality of the published data in the Petroleum Marketing Monthly.

  9. Electric power monthly

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The Energy Information Administration (EIA) prepares the Electric Power Monthly (EPM) for a wide audience including Congress, Federal and State agencies, the electric utility industry, and the general public. This publication provides monthly statistics for net generation, fossil fuel consumption and stocks, quantity and quality of fossil fuels, cost of fossil fuels, electricity sales, revenue, and average revenue per kilowatthour of electricity sold. Data on net generation, fuel consumption, fuel stocks, quantity and cost of fossil fuels are also displayed for the North American Electric Reliability Council (NERC) regions. The EIA publishes statistics in the EPM on net generation by energy source, consumption, stocks, quantity, quality, and cost of fossil fuels; and capability of new generating units by company and plant. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead.

  10. Survey on nutrition and health status in infants under 36 months in Zhen'an, Shaanxi (陕西镇安县36月龄以下婴幼儿营养与健康状况调查)

    Institute of Scientific and Technical Information of China (English)

    孟丽苹; 付萍; 张坚; 常峰; 王林江; 张发胜; 满青青; 宋鹏坤; 李丽祥

    2011-01-01

    [Objective] To analyse the nutrition and health status of infants under 36 months in Zhen'an county, Shaanxi province. [Method] Using the PPS sampling method, a total of 422 infants under 36 months in Zhen'an were chosen randomly and investigated with a questionnaire and anthropometry. Blood samples were taken from children aged 6-35 months to measure hemoglobin and serum 25-(OH)D3 concentration. [Results] The prevalence of stunting, low weight and wasting was 8.5%, 3.6% and 3.1%, respectively. The prevalence of diarrhea and respiratory tract disease in the previous two weeks was 22.2% and 14.5%, respectively. The prevalence of anemia, vitamin D deficiency and vitamin D borderline deficiency in children aged 6-35 months was 25.1%, 15.2% and 34.8%. [Conclusions] The growth of the infants in Zhen'an county is adequate, but the prevalence of anemia and vitamin D deficiency is high. It is necessary to provide a nutrient supplement rich in iron and vitamin D to address anemia and vitamin D deficiency in children aged 6-35 months.

  11. A Further Compilation of Compressible Boundary Layer Data with a Survey of Turbulence Data,

    Science.gov (United States)

    1981-11-01

    the empty tunnel for a range of total pressures and temperatures. This was used in conjunction with recovery factors suggested by Laurence and...little larger). In the regions of strong pressure gradient, the Preston tube and to a lesser degree profile-fit methods are inherently inaccurate. The...temperature relation, eqn. (2.5.37, AG 253) is good. The author’s published values of Ree are in error, being out by a factor of 2.54. Comparisons may be made

  12. Mapping and prediction of schistosomiasis in Nigeria using compiled survey data and Bayesian geospatial modelling

    DEFF Research Database (Denmark)

    Ekpo, Uwem F.; Hürlimann, Eveline; Schur, Nadine

    2013-01-01

    Schistosomiasis prevalence data for Nigeria were extracted from peer-reviewed journals and reports, geo-referenced and collated in a nationwide geographical information system database for the generation of point prevalence maps. This exercise revealed that the disease is endemic in 35 of the cou... ...% confidence interval (CI): 22.8-23.1%). The model suggests that the mean temperature, annual precipitation and soil acidity significantly influence the spatial distribution. Prevalence estimates, adjusted for school-aged children in 2010, showed that the prevalence is...

  13. Petroleum marketing monthly

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-11-01

    The Petroleum Marketing Monthly (PMM) provides information and statistical data on a variety of crude oils and refined petroleum products. The publication presents statistics on crude oil costs and refined petroleum products sales for use by industry, government, private sector analysts, educational institutions, and consumers. Data on crude oil include the domestic first purchase price, the f.o.b. and landed cost of imported crude oil, and the refiners’ acquisition cost of crude oil. Refined petroleum product sales data include motor gasoline, distillates, residuals, aviation fuels, kerosene, and propane. The Petroleum Marketing Division, Office of Oil and Gas, Energy Information Administration ensures the accuracy, quality, and confidentiality of the published data.

  14. abc the aspectBench compiler for aspectJ a workbench for aspect-oriented programming language and compilers research

    DEFF Research Database (Denmark)

    Allan, Chris; Avgustinov, Pavel; Christensen, Aske Simon

    2005-01-01

    Aspect-oriented programming (AOP) is gaining popularity as a new way of modularising cross-cutting concerns. The aspectbench compiler (abc) is a new workbench for AOP research which provides an extensible research framework for both new language features and new compiler optimisations. This poster...

  15. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Marques, D; Pingali, K; Rugina, R; McKee, S A

    2008-01-21

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, reduces checkpoint sizes by as much as 80% and enables asynchronous checkpointing.
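
    A toy sketch of the incremental idea follows (in Python; the paper derives the set of modified data statically through compiler analysis, whereas this sketch detects it dynamically by hashing, and all names are invented):

        import hashlib, pickle

        def checkpoint_incremental(state_blocks, previous_digests, write):
            """Write only the blocks that changed since the previous checkpoint;
            return the digest table to pass to the next call."""
            digests = {}
            for name, block in state_blocks.items():
                digest = hashlib.sha256(pickle.dumps(block)).hexdigest()
                digests[name] = digest
                if previous_digests.get(name) != digest:   # dirty block: save it
                    write(name, block)
            return digests

        saved = {}
        state = {"grid": [0.0] * 4, "step": 0}
        digests = checkpoint_incremental(state, {}, saved.__setitem__)  # full first checkpoint
        state["step"] += 1                                              # only "step" changes
        digests = checkpoint_incremental(state, digests, saved.__setitem__)
        print(sorted(saved))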

  16. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Marques, D; Pingali, K; McKee, S; Rugina, R

    2009-02-18

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, significantly reduces checkpoint sizes and enables asynchronous checkpointing.

  17. Global Seismicity: Three New Maps Compiled with Geographic Information Systems

    Science.gov (United States)

    Lowman, Paul D., Jr.; Montgomery, Brian C.

    1996-01-01

    This paper presents three new maps of global seismicity compiled from NOAA digital data, covering the interval 1963-1998, with three different magnitude ranges (mb): greater than 3.5, less than 3.5, and all detectable magnitudes. A commercially available geographic information system (GIS) was used as the database manager. Epicenter locations were acquired from a CD-ROM supplied by the National Geophysical Data Center. A methodology is presented that can be followed by general users. The implications of the maps are discussed, including the limitations of conventional plate models, and the different tectonic behavior of continental vs. oceanic lithosphere. Several little-known areas of intraplate or passive margin seismicity are also discussed, possibly expressing horizontal compression generated by ridge push.
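
    The same three-way split by body-wave magnitude can be reproduced outside a GIS. The following Python sketch is illustrative only; the file name and column layout are hypothetical and do not reflect the actual format of the NGDC CD-ROM:

        import csv

        def split_by_magnitude(path):
            """Partition epicenters into the three map classes described above:
            mb greater than 3.5, mb not greater than 3.5, and all detectable events."""
            above, below, all_events = [], [], []
            with open(path, newline="") as f:
                for row in csv.DictReader(f):        # assumed columns: lat, lon, mb
                    point = (float(row["lat"]), float(row["lon"]))
                    all_events.append(point)
                    if row["mb"]:                    # some events lack an mb estimate
                        (above if float(row["mb"]) > 3.5 else below).append(point)
            return above, below, all_events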

  18. Methodological challenges involved in compiling the Nahua pharmacopeia.

    Science.gov (United States)

    De Vos, Paula

    2017-06-01

    Recent work in the history of science has questioned the Eurocentric nature of the field and sought to include a more global approach that would serve to displace center-periphery models in favor of approaches that take seriously local knowledge production. Historians of Iberian colonial science have taken up this approach, which involves reliance on indigenous knowledge traditions of the Americas. These traditions present a number of challenges to modern researchers, including availability and reliability of source material, issues of translation and identification, and lack of systematization. This essay explores the challenges that emerged in the author's attempt to compile a pre-contact Nahua pharmacopeia, the reasons for these challenges, and the ways they may - or may not - be overcome.

  19. The Fault Tree Compiler (FTC): Program and mathematics

    Science.gov (United States)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m-of-n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precisely (within the limits of double precision floating point arithmetic) to within a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
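
    As a minimal illustration of the underlying arithmetic (in Python, not the FORTRAN/Pascal implementation described above), the probability of a gate's output event can be computed from the probabilities of its inputs, assuming the basic events are statistically independent:

        from itertools import combinations
        from math import prod

        def gate_prob(kind, probs, m=None):
            """Occurrence probability of a gate's output given independent inputs."""
            if kind == "AND":
                return prod(probs)
            if kind == "OR":
                return 1.0 - prod(1.0 - p for p in probs)
            if kind == "XOR":                  # EXCLUSIVE OR of two inputs
                a, b = probs
                return a * (1.0 - b) + b * (1.0 - a)
            if kind == "INVERT":
                return 1.0 - probs[0]
            if kind == "M_OF_N":               # at least m of the n inputs occur
                n = len(probs)
                return sum(prod(probs[i] if i in chosen else 1.0 - probs[i]
                                for i in range(n))
                           for k in range(m, n + 1)
                           for chosen in combinations(range(n), k))
            raise ValueError(kind)

        # Top event = OR(AND(e1, e2), 2-of-3(e3, e4, e5)), with illustrative probabilities.
        e = [1e-3, 2e-3, 1e-2, 5e-3, 2e-2]
        top = gate_prob("OR", [gate_prob("AND", e[:2]),
                               gate_prob("M_OF_N", e[2:], m=2)])
        print(f"{top:.3e}")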

  20. A Conformance Test Suite for Arden Syntax Compilers and Interpreters.

    Science.gov (United States)

    Wolf, Klaus-Hendrik; Klimek, Mike

    2016-01-01

    The Arden Syntax for Medical Logic Modules is a standardized and well-established programming language for representing medical knowledge. No public test suite exists for testing the compliance level of existing compilers and interpreters. This paper presents research transforming the specification into a set of unit tests, represented in JUnit. It further reports on using the test suite to test four different Arden Syntax processors. The presented and compared results reveal the conformance status of the tested processors. How test-driven development of Arden Syntax processors can help increase compliance with the standard is described with two examples. Finally, some considerations on how an open-source test suite can improve the development and distribution of the Arden Syntax are presented.
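
    The suite itself is written in JUnit and runs against real Arden Syntax processors; purely to illustrate the approach of turning specification statements into executable checks, the sketch below uses Python's unittest and a hypothetical stand-in evaluator (the example rule, that arithmetic involving null yields null, reflects Arden's null-propagation behaviour):

        import unittest

        def evaluate(expression):
            """Hypothetical stand-in; a real conformance suite would invoke the
            Arden Syntax compiler or interpreter under test instead."""
            if expression == "1 + null":
                return None
            raise NotImplementedError(expression)

        class ArithmeticConformance(unittest.TestCase):
            def test_addition_with_null_yields_null(self):
                self.assertIsNone(evaluate("1 + null"))

        if __name__ == "__main__":
            unittest.main()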

  1. Efficient topological compilation for a weakly integral anyonic model

    Science.gov (United States)

    Bocharov, Alex; Cui, Xingshan; Kliuchnikov, Vadym; Wang, Zhenghan

    2016-01-01

    A class of anyonic models for universal quantum computation based on weakly-integral anyons has been recently proposed. While a universal set of gates cannot be obtained in this context by anyon braiding alone, designing a certain type of sector charge measurement provides universality. In this paper we develop a compilation algorithm to approximate arbitrary n-qutrit unitaries with asymptotically efficient circuits over the metaplectic anyon model. One flavor of our algorithm produces efficient circuits with upper complexity bound asymptotically in O(3^{2n} log(1/ɛ)) and entanglement cost that is exponential in n. Another flavor of the algorithm produces efficient circuits with upper complexity bound in O(n 3^{2n} log(1/ɛ)) and no additional entanglement cost.

  2. Reporting session of UWTF operation. Compilation of documents

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Kaoru; Togashi, Akio; Irinouchi, Shigenori [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (JP). Tokai Works] (and others)

    1999-07-01

    This is the compilation of the papers and OHP transparencies presented, as well as discussions and comments, on the occasion of the UWTF reporting session. UWTF stands for The Second Uranium Waste Treatment Facility, which was constructed for compression of metallic wastes and used filters, which are part of the uranium-bearing solid wastes generated from Tokai Works, Japan Nuclear Cycle Development Institute. UWTF has been processing wastes since June 4, 1998. In the session, based on the one-year experience of UWTF operation, the difficulties met and the suggestions to the waste sources are mainly discussed. A brief summary of the UWTF construction, a description of the waste treatment process, and the operation report for fiscal year 1998 are attached. (A. Yamamoto)

  3. A compilation of charged-particle induced thermonuclear reaction rates

    CERN Document Server

    Angulo, C; Rayet, M; Descouvemont, P; Baye, D; Leclercq-Willain, C; Coc, A; Barhoumi, S; Aguer, P; Rolfs, C; Kunz, R; Hammer, J W; Mayer, A; Paradelis, T; Kossionides, S; Chronidou, C; Spyrou, K; Degl'Innocenti, S; Fiorentini, G; Ricci, B; Zavatarelli, S; Providência, C; Wolters, H; Soares, J; Grama, C; Rahighi, J; Shotter, A; Rachti, M L

    1999-01-01

    Low-energy cross section data for 86 charged-particle induced reactions involving light (1 <= Z <= 14), mostly stable, nuclei are compiled. The corresponding Maxwellian-averaged thermonuclear reaction rates of relevance in astrophysical plasmas at temperatures in the range from 10^6 K to 10^10 K are calculated. These evaluations assume either that the target nuclei are in their ground state, or that the target states are thermally populated following a Maxwell-Boltzmann distribution, except in some cases involving isomeric states. Adopted values complemented with lower and upper limits of the rates are presented in tabular form. Analytical approximations to the adopted rates, as well as to the inverse/direct rate ratios, are provided.
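
    For reference, the Maxwellian average referred to here is the standard thermal average of the cross section over the relative-energy distribution (E is the centre-of-mass energy, mu the reduced mass, T the plasma temperature, k Boltzmann's constant), written in LaTeX:

        \langle \sigma v \rangle \;=\; \sqrt{\frac{8}{\pi \mu}} \, (kT)^{-3/2}
          \int_{0}^{\infty} \sigma(E) \, E \, \exp\!\left(-\frac{E}{kT}\right) dE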

  4. A compilation of charged-particle induced thermonuclear reaction rates

    Energy Technology Data Exchange (ETDEWEB)

    Angulo, C.; Arnould, M.; Rayet, M.; Descouvemont, P.; Baye, D.; Leclercq-Willain, C.; Coc, A.; Barhoumi, S.; Aguer, P.; Rolfs, C.; Kunz, R.; Hammer, J.W.; Mayer, A.; Paradellis, T.; Kossionides, S.; Chronidou, C.; Spyrou, K.; Degl' Innocenti, S.; Fiorentini, G.; Ricci, B.; Zavatarelli, S.; Providencia, C.; Wolters, H.; Soares, J.; Grama, C.; Rahighi, J.; Shotter, A.; Rachti, M. Lamehi

    1999-08-23

    Low-energy cross section data for 86 charged-particle induced reactions involving light (1 <= Z <= 14), mostly stable, nuclei are compiled. The corresponding Maxwellian-averaged thermonuclear reaction rates of relevance in astrophysical plasmas at temperatures in the range from 10^6 K to 10^10 K are calculated. These evaluations assume either that the target nuclei are in their ground state, or that the target states are thermally populated following a Maxwell-Boltzmann distribution, except in some cases involving isomeric states. Adopted values complemented with lower and upper limits of the rates are presented in tabular form. Analytical approximations to the adopted rates, as well as to the inverse/direct rate ratios, are provided.

  5. Compilation of Sandia coal char combustion data and kinetic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation, and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.

  6. abc: An Extensible AspectJ Compiler

    DEFF Research Database (Denmark)

    Avgustinov, Pavel; Christensen, Aske Simon; Hendren, Laurie J.

    2006-01-01

    Research in the design of aspect-oriented programming languages requires a workbench that facilitates easy experimentation with new language features and implementation techniques. In particular, new features for AspectJ have been proposed that require extensions in many dimensions: syntax, type checking and code generation, as well as data flow and control flow analyses. The AspectBench Compiler (abc) is an implementation of such a workbench. The base version of abc implements the full AspectJ language. Its front end is built using the Polyglot framework, as a modular extension of the Java language. The use of Polyglot gives flexibility of syntax and type checking. The back end is built using the Soot framework, to give modular code generation and analyses. In this paper, we outline the design of abc, focusing mostly on how the design supports extensibility. We then provide a general...

  7. COMPILATION OF LABORATORY SCALE ALUMINUM WASH AND LEACH REPORT RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    HARRINGTON SJ

    2011-01-06

    This report compiles and analyzes all known wash and caustic leach laboratory studies. As further data are produced, this report will be updated. Included are aluminum mineralogical analysis results as well as a summation of the wash and leach procedures and results. Of the 177 underground storage tanks at Hanford, information was available for only five individual double-shell tanks, forty-one individual single-shell tanks (i.e., thirty-nine 100-series and two 200-series tanks), and twelve grouped tank wastes. Seven of the individual single-shell tank studies provided data for the percent of aluminum removal as a function of time for various caustic concentrations and leaching temperatures. It was determined that in most cases increased leaching temperature, caustic concentration, and leaching time lead to increased dissolution of leachable aluminum solids.

  8. Survey of pertussis infection in infants aged under 3 months with persistent cough

    Institute of Scientific and Technical Information of China (English)

    米荣; 伏瑾; 康利民; 崔晓岱; 王晓颖; 李莉; 徐放生

    2012-01-01

    Objective: To explore the prevalence of pertussis in hospitalized infants aged under 3 months with persistent cough. Methods: Nasopharyngeal secretions and serum samples were collected from hospitalized infants aged under 3 months with cough for over 2 weeks from January 2011 to January 2012. The samples of nasopharyngeal secretion were suctioned and collected. A multiplex PCR assay was employed to identify Bordetella pertussis (B. pertussis) and an enzyme-linked immunosorbent assay was used to detect antibody to pertussis toxin (PT-IgG). Total bacterial DNA was extracted from the nasopharyngeal secretions and the two targets IS481/PT of B. pertussis were detected by PCR. Results: Fifty-nine infants (32 boys and 27 girls) were enrolled. None of them had been pre-immunized with diphtheria-pertussis-tetanus vaccine. Seventeen infants (28.8%) were B. pertussis positive. Among the 17 cases, 3 infants were under 1 month, 4 were 1-2 months, and 10 were 2-3 months old. Three infants had household contacts with persistent cough, and their serum antibodies to pertussis toxin were positive. Sixteen infants with pertussis had paroxysms of frequent and rapid coughs, and 5 with pertussis had a long inspiratory effort accompanied by a high-pitched "whoop" at the end of the paroxysms. Seven infants with pertussis had conjunctival bleeding, a special sign of pertussis. Ten infants had lymphocytosis with a predominant elevation of lymphocytes. Conclusions: B. pertussis is an important pathogen in infants under 3 months with persistent cough. Multiplex PCR may be used to identify B. pertussis with high sensitivity. Unrecognized close family members of infants with pertussis are probably an important source of infection.

  9. Construction experiences from underground works at Oskarshamn. Compilation report

    Energy Technology Data Exchange (ETDEWEB)

    Carlsson, Anders (Vattenfall Power Consultant AB, Stockholm (SE)); Christiansson, Rolf (Swedish Nuclear Fuel and Waste Management Co., Stockholm (SE))

    2007-12-15

    The main objective of this report is to compile experiences from the underground works carried out at Oskarshamn, primarily construction experiences from the tunnelling of the cooling water tunnels of the Oskarshamn nuclear power units 1, 2 and 3, from the underground excavations of Clab 1 and 2 (Central Interim Storage Facility for Spent Nuclear Fuel), and from the Aespoe Hard Rock Laboratory. In addition, an account is given of the operational experience of Clab 1 and 2 and of the Aespoe HRL, primarily regarding scaling and rock support solutions. This report, being a compilation report, is in substance based on earlier published material as presented in the list of references. Approximately 8,000 m of tunnels, including three major rock caverns with a total volume of about 550,000 m3, have been excavated. The excavation works of the various tunnels and rock caverns were carried out during the period 1966-2000. In addition, minor excavation works were carried out at the Aespoe HRL in 2003. The depth location of the underground structures varies from near surface down to 450 m. As an overall conclusion it may be said that the rock mass conditions in the area are well suited for underground construction. This conclusion is supported by the experiences from the rock excavation works in the Simpevarp and Aespoe area. These works have shown that no major problems occurred during the excavation works; nor have any stability or other rock engineering problems of significance been identified after the commissioning of the Oskarshamn nuclear power units O1, O2 and O3, BFA, Clab 1 and 2, and the Aespoe Hard Rock Laboratory. The underground structures of these facilities were built according to plan and have since then been operated as planned. Thus, the quality of the rock mass within the construction area is such that it lends itself to excavation of large rock caverns with a minimum of rock support.

  10. Southwest Indian Ocean Bathymetric Compilation (swIOBC)

    Science.gov (United States)

    Jensen, L.; Dorschel, B.; Arndt, J. E.; Jokat, W.

    2014-12-01

    As a result of long-term scientific activities in the southwest Indian Ocean, an extensive amount of swath bathymetric data has accumulated in the AWI database. Using these data as a backbone, supplemented by additional bathymetric data sets and predicted bathymetry, we generate a comprehensive regional bathymetric data compilation for the southwest Indian Ocean. A high-resolution bathymetric chart of this region will support geological and climate research: identification of current-induced seabed structures will help in modelling oceanic currents and, thus, provide proxy information about the paleo-climate. Analysis of the sediment distribution will contribute to reconstructing the erosional history of Eastern Africa. The aim of swIOBC is to produce a homogeneous and seamless bathymetric grid with an associated meta-database and a corresponding map for the area from 5° to 39° S and 20° to 44° E. Currently, multibeam data with a track length of approximately 86,000 km are held in-house. In combination with external echosounding data this allows for the generation of a regional grid, significantly improving the existing, mostly satellite-altimetry-derived, bathymetric models. The collected data sets are heterogeneous in terms of age, acquisition system, background data, resolution, accuracy, and documentation. As a consequence, the production of a bathymetric grid requires special techniques and algorithms, which were already developed for the IBCAO (Jakobsson et al., 2012) and further refined for the IBCSO (Arndt et al., 2013). The new regional southwest Indian Ocean chart will be created based on these methods. Arndt, J.E., et al., 2013. The International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0 - A new bathymetric compilation covering circum-Antarctic waters. GRL 40, 1-7, doi: 10.1002/grl.50413, 2013. Jakobsson, M., et al., 2012. The International Bathymetric Chart of the Arctic Ocean (IBCAO) Version 3.0. GRL 39, L12609, doi: 10.1029/2012GL052219.

  11. Compilation and evaluation of a Paso del Norte emission inventory

    Energy Technology Data Exchange (ETDEWEB)

    Funk, T.H.; Chinkin, L.R.; Roberts, P.T. [Sonoma Technology, Inc., 1360 Redwood Way, Suite C, 94954-1169 Petaluma, CA (United States); Saeger, M.; Mulligan, S. [Pacific Environmental Services, 5001 S. Miami Blvd., Suite 300, 27709 Research Triangle Park, NC (United States); Paramo Figueroa, V.H. [Instituto Nacional de Ecologia, Avenue Revolucion 1425, Nivel 10, Col. Tlacopac San Angel, Delegacion Alvaro Obregon, C.P., 01040, D.F. Mexico (Mexico); Yarbrough, J. [US Environmental Protection Agency - Region 6, 1445 Ross Avenue, Suite 1200, 75202-2733 Dallas, TX (United States)

    2001-08-10

    Emission inventories of ozone precursors are routinely used as input to comprehensive photochemical air quality models. Photochemical model performance and the development of effective control strategies rely on the accuracy and representativeness of the underlying emission inventory. This paper describes the tasks undertaken to compile and evaluate an ozone precursor emission inventory for the El Paso/Ciudad Juarez/Southern Dona Ana region. Point, area and mobile source emission data were obtained from local government agencies and were spatially and temporally allocated to a gridded domain using region-specific demographic and land-cover information. The inventory was then processed using the US Environmental Protection Agency (EPA) recommended Emissions Preprocessor System 2.0 (UAM-EPS 2.0), which generates emissions files compatible with the Urban Airshed Model (UAM). A top-down evaluation of the emission inventory was performed to examine how well the inventory represented ambient pollutant compositions. The top-down evaluation methodology employed in this study compares emission inventory non-methane hydrocarbon (NMHC)/nitrogen oxide (NOx) and carbon monoxide (CO)/NOx ratios to corresponding ambient ratios. Detailed NMHC species comparisons were made in order to investigate the relative composition of individual hydrocarbon species in the emission inventory and in the ambient data. The emission inventory compiled during this effort has since been used to model ozone in the Paso del Norte airshed (Emery et al., CAMx modeling of ozone and carbon monoxide in the Paso del Norte airshed. In: Proc of Ninety-Third Annual Meeting of Air and Waste Management Association, 18-22 June 2000, Air and Waste Management Association, Pittsburgh, PA, 2000).
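
    The ratio comparison at the heart of the top-down check is simple arithmetic; the sketch below illustrates it in isolation (class and method names are illustrative only, and no values from the Paso del Norte study are embedded):

        // Top-down ratio check sketch: compare inventory-derived and ambient
        // NMHC/NOx and CO/NOx ratios. All inputs are supplied by the caller.
        public final class TopDownRatioCheck {

            static double ratio(double species, double nox) {
                return species / nox;
            }

            // Relative difference of the inventory ratio with respect to the ambient ratio.
            static double relativeDifference(double inventoryRatio, double ambientRatio) {
                return (inventoryRatio - ambientRatio) / ambientRatio;
            }

            static void report(String label, double speciesInv, double noxInv,
                               double speciesAmb, double noxAmb) {
                double rInv = ratio(speciesInv, noxInv);
                double rAmb = ratio(speciesAmb, noxAmb);
                System.out.printf("%s inventory=%.2f ambient=%.2f diff=%.1f%%%n",
                        label, rInv, rAmb, 100.0 * relativeDifference(rInv, rAmb));
            }

            public static void main(String[] args) {
                // Placeholder totals in consistent units: inventory NMHC, CO, NOx
                // followed by the same three ambient-derived values.
                double[] v = new double[6];
                for (int i = 0; i < 6; i++) v[i] = Double.parseDouble(args[i]);
                report("NMHC/NOx:", v[0], v[2], v[3], v[5]);
                report("CO/NOx:  ", v[1], v[2], v[4], v[5]);
            }
        }

    A large discrepancy between the two ratios flags a likely bias in one of the inventory's precursor categories, which is the diagnostic use described in the abstract.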

  12. Commissioners' Monthly Case Activity Report

    Data.gov (United States)

    Occupational Safety and Health Review Commission — Total cases pending at the beginning of the month, total cases added to the docket during the month, total cases disposed of during the month, and total cases...

  13. NOAA/NOS and USCGS Seabed Descriptions from Hydrographic Surveys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA/NOS and USCGS Seabed Descriptions from Hydrographic Surveys database is a compilation of surficial sediment composition from multiple sources for over...

  14. Gulf of Mexico Marine Mammal Assessment Vessel Surveys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data sets are a compilation of large vessel surveys for marine mammal stock assessments in the Gulf of Mexico from 1991 to the present. These are designed as...

  15. US Forest Service Forest Health Protection Insect and Disease Survey

    Data.gov (United States)

    US Forest Service, Department of Agriculture — This data is a compilation of forest insect, disease and abiotic damage mapped by aerial detection surveys on forested areas in the United States. US Forest Service,...

  16. Gulf of Mexico Protected Species Assessment Aerial Surveys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data sets include a compilation of aerial line-transect surveys conducted over continental shelf waters of the Gulf of Mexico since 1992. The majority of these...

  17. IT User Community Survey

    CERN Document Server

    Peter Jones (IT-CDA-WF)

    2016-01-01

    IT-CDA is gathering information to more accurately form a snapshot of the CERN IT user community and we would appreciate you taking time to complete the following survey.   We want to use this survey to better understand how the user community uses their devices and our services, and how the delivery of those services could be improved. You will need to authenticate to complete the survey. However please note that your responses are confidential and will be compiled together and analysed as a group. You can also volunteer to offer additional information if you so wish. This survey should take no longer than 5 minutes. Thanks in advance for your collaboration.

  18. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
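
    For reference, the centered root mean square error named in (i) is conventionally defined, for a homogenized series x and the corresponding true homogeneous series y of length N, as (standard definition, not quoted from the paper)

        \mathrm{CRMSE} \;=\; \sqrt{\frac{1}{N}\sum_{t=1}^{N}\Bigl[(x_t - \bar{x}) - (y_t - \bar{y})\Bigr]^2}

    so that a constant offset between the homogenized and true series does not contribute to the error.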

  19. Compilation of PRF Canyon Floor Pan Sample Analysis Results

    Energy Technology Data Exchange (ETDEWEB)

    Pool, Karl N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Minette, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wahl, Jon H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Greenwood, Lawrence R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Coffey, Deborah S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McNamara, Bruce K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bryan, Samuel A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Scheele, Randall D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sinkov, Sergey I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Soderquist, Chuck Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brown, Garrett N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Clark, Richard A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-06-30

    On September 28, 2015, debris collected from the PRF (236-Z) canyon floor, Pan J, was observed to exhibit chemical reaction. The material had been transferred from the floor pan to a collection tray inside the canyon the previous Friday. Work in the canyon was stopped to allow Industrial Hygiene to perform monitoring of the material reaction. Canyon floor debris that had been sealed out was sequestered at the facility, a recovery plan was developed, and drum inspections were initiated to verify no additional reactions had occurred. On October 13, in-process drums containing other Pan J material were inspected and showed some indication of chemical reaction, limited to discoloration and degradation of inner plastic bags. All Pan J material was sealed back into the canyon and returned to collection trays. Based on the high airborne levels in the canyon during physical debris removal, ETGS (Encapsulation Technology Glycerin Solution) was used as a fogging/lock-down agent. On October 15, subject matter experts confirmed a reaction had occurred between nitrates (both Plutonium Nitrate and Aluminum Nitrate Nonahydrate (ANN) are present) in the Pan J material and the ETGS fixative used to lower airborne radioactivity levels during debris removal. Management stopped the use of fogging/lock-down agents containing glycerin on bulk materials, declared a Management Concern, and initiated the Potential Inadequacy in the Safety Analysis determination process. Additional drum inspections and laboratory analysis of both reacted and unreacted material are planned. This report compiles the results of many different sample analyses conducted by the Pacific Northwest National Laboratory on samples collected from the Plutonium Reclamation Facility (PRF) floor pans by the CH2MHill’s Plateau Remediation Company (CHPRC). Revision 1 added Appendix G that reports the results of the Gas Generation Rate and methodology. The scope of analyses requested by CHPRC includes the determination of

  20. Time and death in compiled adab "biographies"

    Directory of Open Access Journals (Sweden)

    Kilpatrick, Hilary

    2004-12-01

    Full Text Available In mediaeval Arabic belles-lettres (adab), accounts of lives are usually made up of quite short reports (akhbār). These akhbār are arranged in different ways, one of which is chronological order, but the compilers of such accounts apparently attach little importance to chronological order. This paper examines some "biographies" compiled by al-Ṣūlī and Abū l-Faraj al-Iṣbahānī, showing that temporal progression can exist in a "biographical" presentation, either alone or, more often, combined with other ways of organising the material. It then focuses on the placing of subjects' deaths in life accounts and on how they are integrated with the rest of the material. In conclusion, I suggest that when temporal progression is absent in "biographical" presentations, this should be seen as reflecting a mediaeval Arabic approach to life writing which differs from modern expectations but has its own rationale.

  1. Petroleum supply monthly

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

    The Petroleum Supply Monthly (PSM) is one of a family of four publications produced by the Petroleum Supply Division within the Energy Information Administration (EIA) reflecting different levels of data timeliness and completeness. The other publications are the Weekly Petroleum Status Report (WPSR), the Winter Fuels Report, and the Petroleum Supply Annual (PSA). Data presented in the PSM describe the supply and disposition of petroleum products in the United States and major US geographic regions. The data series describe production, imports and exports, inter-Petroleum Administration for Defense (PAD) District movements, and inventories by the primary suppliers of petroleum products in the United States (50 States and the District of Columbia). The reporting universe includes those petroleum sectors in primary supply. Included are: petroleum refiners, motor gasoline blenders, operators of natural gas processing plants and fractionators, inter-PAD transporters, importers, and major inventory holders of petroleum products and crude oil. When aggregated, the data reported by these sectors approximately represent the consumption of petroleum products in the United States.

  2. Petroleum Supply Monthly

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-01

    The Petroleum Supply Monthly (PSM) is one of a family of four publications produced by the Petroleum Supply Division within the Energy Information Administration (EIA) reflecting different levels of data timeliness and completeness. The other publications are the Weekly Petroleum Status Report (WPSR), the Winter Fuels Report, and the Petroleum Supply Annual (PSA). Data presented in the PSM describe the supply and disposition of petroleum products in the United States and major U.S. geographic regions. The data series describe production, imports and exports, inter-Petroleum Administration for Defense (PAD) District movements, and inventories by the primary suppliers of petroleum products in the United States (50 States and the District of Columbia). The reporting universe includes those petroleum sectors in primary supply. Included are: petroleum refiners, motor gasoline blenders, operators of natural gas processing plants and fractionators, inter-PAD transporters, importers, and major inventory holders of petroleum products and crude oil. When aggregated, the data reported by these sectors approximately represent the consumption of petroleum products in the United States. Data presented in the PSM are divided into two sections: Summary Statistics and Detailed Statistics.

  3. COSMIC monthly progress report

    Science.gov (United States)

    1994-01-01

    Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of May 1994. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are summarized. Nine articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) WFI - Windowing System for Test and Simulation; (2) HZETRN - A Free Space Radiation Transport and Shielding Program; (3) COMGEN-BEM - Composite Model Generation-Boundary Element Method; (4) IDDS - Interactive Data Display System; (5) CET93/PC - Chemical Equilibrium with Transport Properties, 1993; (6) SDVIC - Sub-pixel Digital Video Image Correlation; (7) TRASYS - Thermal Radiation Analyzer System (HP9000 Series 700/800 Version without NASADIG); (8) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (VAX VMS Version); and (9) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (UNIX Version). Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.

  4. Data compilation and assessment for water resources in Pennsylvania state forest and park lands

    Science.gov (United States)

    Galeone, Daniel G.

    2011-01-01

    As a result of a cooperative study between the U.S. Geological Survey and the Pennsylvania Department of Conservation and Natural Resources (PaDCNR), available electronic data were compiled for Pennsylvania state lands (state forests and parks) to allow PaDCNR to initially determine if data exist to make an objective evaluation of water resources for specific basins. The data compiled included water-quantity and water-quality data and sample locations for benthic macroinvertebrates within state-owned lands (including a 100-meter buffer around each land parcel) in Pennsylvania. In addition, internet links or contacts for geographic information system coverages pertinent to water-resources studies also were compiled. Water-quantity and water-quality data primarily available through January 2007 were compiled and summarized for site types that included streams, lakes, ground-water wells, springs, and precipitation. Data were categorized relative to 35 watershed boundaries defined by the Pennsylvania Department of Environmental Protection for resource-management purposes. The primary sources of continuous water-quantity data for Pennsylvania state lands were the U.S. Geological Survey (USGS) and the National Weather Service (NWS). The USGS has streamflow data for 93 surface-water sites located in state lands; 38 of these sites have continuous-recording data available. As of January 2007, 22 of these 38 streamflow-gaging stations were active; the majority of active gaging stations have over 40 years of continuous record. The USGS database also contains continuous ground-water elevation data for 32 wells in Pennsylvania state lands, 18 of which were active as of January 2007. Sixty-eight active precipitation stations (primarily from the NWS network) are located in state lands. The four sources of available water-quality data for Pennsylvania state lands were the USGS, U.S. Environmental Protection Agency, Pennsylvania Department of Environmental Protection (PaDEP), and

  5. A survey of family nurture environment for infants aged 0-24 months

    Institute of Scientific and Technical Information of China (English)

    李彩虹; 朱宗涵; 戴耀华

    2012-01-01

    Objective: To investigate the status of the family nurture environment and its influencing factors for infants aged 0 to 24 months, so as to provide references for family rearing. Methods: Beijing, Changzhi, Huanggang, Suzhou and Nanning were selected as study areas. Structured questionnaires were completed by caregivers of infants aged 0-24 months, and the results were analyzed using SPSS. Results: Altogether 1 036 cases participated in the study. The rate of exclusive breastfeeding in the study areas was low, and further efforts were needed to reinforce complementary feeding. The complementary feeding rates of fruits, vegetables, meat and fish were significantly higher in urban than in rural areas (χ² values were 4.366, 6.562, 10.812 and 20.208, respectively, all P < 0.05). The rate of vaccination was high, and 89.9% of infants had regular physical examinations. The prevalence of respiratory tract infection (RTI) was the highest, and the awareness rates of fever and cough were higher than those of other symptoms. Parents' knowledge of disease was related to previous health status. Of the 1 036 infants, 377 (36.4%) had accidental injuries. Diapers were still used for 66.5% of infants, and 32.7% used both cloth and disposable diapers. For skin cleaning, water was most frequently used (67.5%). The average sleep time of the studied infants at night was 9.7 hours. Conclusion: Infants' development is significantly associated with parents' care and the nurture environment. Within the family nurture environment, it is necessary to strengthen scientific infant-rearing concepts and skills so that the early development of infants can be promoted.

  6. A compilation of structure functions in deep inelastic scattering

    CERN Document Server

    Gehrmann, T; Whalley, M R

    1999-01-01

    A compilation of all the available data on the unpolarized structure functions F_2 and xF_3, R (= sigma_L/sigma_T), the virtual photon asymmetries A_1 and A_2 and the polarized structure functions g_1 and g_2, from deep inelastic lepton scattering off protons, deuterium and nuclei is presented. The relevant experiments at CERN, DESY, Fermilab and SLAC from 1991, the date of our earlier review, to the present day are covered. A brief general theoretical introduction is given, followed by the data presented both in tabular and graphical form and, for the F_2 and xF_3 data, the predictions based on the MRST98 and CTEQ4 parton distribution functions are also displayed. All the data in this review, together with data on a wide variety of other reactions, can be found in and retrieved from the Durham-RAL HEP Databases on the World-Wide-Web (http://durpdg.dur.ac.uk/HEPDATA). (76 refs).

  7. Archive Compiles New Resource for Global Tropical Cyclone Research

    Science.gov (United States)

    Knapp, Kenneth R.; Kruk, Michael C.; Levinson, David H.; Gibney, Ethan J.

    2009-02-01

    The International Best Track Archive for Climate Stewardship (IBTrACS) compiles tropical cyclone best track data from 11 tropical cyclone forecast centers around the globe, producing a unified global best track data set (M. C. Kruk et al., A technique for merging global tropical cyclone best track data, submitted to Journal of Atmospheric and Oceanic Technology, 2008). Best track data (so called because the data generally refer to the best estimate of a storm's characteristics) include the position, maximum sustained winds, and minimum central pressure of a tropical cyclone at 6-hour intervals. Despite the significant impact of tropical cyclones on society and natural systems, there had been no central repository maintained for global best track data prior to the development of IBTrACS in 2008. The data set, which builds upon the efforts of the international tropical forecasting community, has become the most comprehensive global best track data set publicly available. IBTrACS was created by the U.S. National Oceanic and Atmospheric Administration's National Climatic Data Center (NOAA NCDC) under the auspices of the World Data Center for Meteorology.

  8. Cross-Compiler for Modeling Space-Flight Systems

    Science.gov (United States)

    James, Mark

    2007-01-01

    Ripples is a computer program that makes it possible to specify arbitrarily complex space-flight systems in an easy-to-learn, high-level programming language and to have the specification automatically translated into LibSim, which is a text-based computing language in which such simulations are implemented. LibSim is a very powerful simulation language, but learning it takes considerable time, and it requires that models of systems and their components be described at a very low level of abstraction. To construct a model in LibSim, it is necessary to go through a time-consuming process that includes modeling each subsystem, including defining its fault-injection states, input and output conditions, and the topology of its connections to other subsystems. Ripples makes it possible to describe the same models at a much higher level of abstraction, thereby enabling the user to build models faster and with fewer errors. Ripples can be executed in a variety of computers and operating systems, and can be supplied in either source code or binary form. It must be run in conjunction with a Lisp compiler.

  9. Research at GANIL. A compilation 1996-1997

    Energy Technology Data Exchange (ETDEWEB)

    Balanzat, E.; Bex, M.; Galin, J.; Geswend, S. [eds.

    1998-12-01

    The present compilation gives an overview of experimental results obtained with the GANIL facility during the period 1996-1997. It includes nuclear physics activities as well as interdisciplinary research. The scientific domain presented here extends well beyond traditional nuclear physics and includes atomic physics, condensed matter physics, nuclear astrophysics, radiation chemistry, radiobiology as well as applied physics. In the nuclear physics field, many new results have been obtained concerning nuclear structure as well as the dynamics of nuclear collisions and the nuclear disassembly of complex systems. Results presented deal in particular with the problem of energy equilibration, timescales and the origin of multifragmentation. Nuclear structure studies using both stable and radioactive beams deal with halo systems, the study of shell closures far from stability, the existence of nuclear molecules, as well as measurements of fundamental data such as half-lives, nuclear masses, nuclear radii, and quadrupole and magnetic moments. In addition to the traditional fields of atomic and solid state physics, new themes such as radiation chemistry and radiobiology are progressively being tackled. (K.A.)

  10. A CONVERT compiler of REC for PDP-8

    CERN Document Server

    McIntosh, Harold V

    2011-01-01

    REC (REGULAR EXPRESSION COMPILER) is a programming language of simple structure developed originally for the PDP-8 computer of the Digital Equipment Corporation, but readily adaptable to any other general purpose computer. It has been used extensively in teaching Algebra and Numerical Analysis in the Escuela Superior de Física y Matemáticas of the Instituto Politécnico Nacional. Moreover, the fact that the same control language, REC, is equally applicable and equally efficient over the whole range of computer facilities available to the students gives a very welcome coherence to the entire teaching program, including the course of Mathematical Logic which is devoted to the theoretical aspects of such matters. REC derives its appeal from the fact that computers can be regarded reasonably well as Turing Machines. The REC notation is simply a manner of writing regular expressions, somewhat more amenable to programming the Turing Machines which they control. If one does not wish to think so strictly in term...

  11. OMPC: an open-source MATLAB®-to-Python compiler

    Directory of Open Access Journals (Sweden)

    Peter Jurica

    2009-02-01

    Full Text Available Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we introduce an Open-source MATLAB®-to-Python Compiler (OMPC, a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules run independent of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  12. OMPC: an Open-Source MATLAB-to-Python Compiler.

    Science.gov (United States)

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB((R)), the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB((R))-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB((R)) functions into Python programs. The imported MATLAB((R)) modules will run independently of MATLAB((R)), relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB((R)). OMPC is available at http://ompc.juricap.com.

  13. OMPC: an Open-Source MATLAB®-to-Python Compiler

    Science.gov (United States)

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577

  14. Emetic and Electric Shock Alcohol Aversion Therapy: Six- and Twelve-Month Follow-Up.

    Science.gov (United States)

    Cannon, Dale S.; Baker, Timothy B.

    1981-01-01

    Follow-up data are presented for 6- and 12-months on male alcoholics (N=20) who received either a multifaceted inpatient alcoholism treatment program alone (controls) or emetic or shock aversion therapy in addition to that program. Both emetic and control subjects compiled more days of abstinence than shock subjects. (Author)

  15. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    Science.gov (United States)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures; it was implemented in hardware by researchers and also in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  16. Compilation Techniques Specific for a Hardware Cryptography-Embedded Multimedia Mobile Processor

    Directory of Open Access Journals (Sweden)

    Masa-aki FUKASE

    2007-12-01

    Full Text Available The development of single chip VLSI processors is the key technology of ever growing pervasive computing to answer overall demands for usability, mobility, speed, security, etc. We have so far developed a hardware cryptography-embedded multimedia mobile processor architecture, HCgorilla. Since HCgorilla integrates a wide range of techniques from architectures to applications and languages, a one-sided design approach is not always useful. HCgorilla needs a more complicated strategy, that is, hardware/software (H/S) codesign. Thus, we exploit the software support of HCgorilla, composed of a Java interface and parallelizing compilers. They are assumed to be installed in servers in order to reduce the load and increase the performance of HCgorilla-embedded clients. Since compilers are the essence of software's responsibility, we focus in this article on our recent results about the design, specifications, and prototyping of parallelizing compilers for HCgorilla. The parallelizing compilers are composed of a multicore compiler and a LIW compiler. They are specified to abstract parallelism from executable serial codes or the Java interface output and to output codes executable in parallel by HCgorilla. The prototyping compilers are written in Java. An evaluation using an arithmetic test program shows the reasonableness of the prototyping compilers compared with hand compilation.
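
    HCgorilla's own code formats are not shown in the abstract; as a generic illustration of the kind of transformation a parallelizing compiler performs, the following sketch shows a serial reduction and an equivalent form in which the independent work is exposed for parallel execution (plain Java, not HCgorilla output):

        import java.util.stream.IntStream;

        public final class ParallelizationSketch {

            // Serial form: the loop-carried accumulation hides the available parallelism.
            static long sumOfSquaresSerial(int[] data) {
                long total = 0;
                for (int value : data) {
                    total += (long) value * value;
                }
                return total;
            }

            // Parallel form: the same reduction expressed so that independent work
            // can be distributed across cores (here via Java parallel streams).
            static long sumOfSquaresParallel(int[] data) {
                return IntStream.of(data).parallel()
                        .mapToLong(value -> (long) value * value)
                        .sum();
            }

            public static void main(String[] args) {
                int[] data = IntStream.rangeClosed(1, 1_000).toArray();
                System.out.println(sumOfSquaresSerial(data));   // 333833500
                System.out.println(sumOfSquaresParallel(data)); // same result
            }
        }

    A parallelizing compiler automates this kind of restructuring, detecting that the loop iterations are independent apart from the reduction and emitting code that schedules them across cores.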

  18. Potential Theory Surveys and Problems

    CERN Document Server

    Lukeš, Jaroslav; Netuka, Ivan; Veselý, Jiří

    1988-01-01

    The volume comprises eleven survey papers based on survey lectures delivered at the Conference in Prague in July 1987, which covered various facets of potential theory, including its applications in other areas. The survey papers deal with both classical and abstract potential theory and its relations to partial differential equations, stochastic processes and other branches such as numerical analysis and topology. A collection of problems from potential theory, compiled on the occasion of the conference, is included, with additional commentaries, in the second part of this volume.

  19. Advances in Compiler Infrastructures

    Institute of Scientific and Technical Information of China (English)

    戴桂兰; 田金兰; 张素琴; 蒋维杜

    2002-01-01

    Based on the main components of a compiler infrastructure, this paper discusses some key techniques for compiler back ends, reviews recent representative common compiler infrastructures and the individual techniques they use, and outlines some current issues of compiler back ends and directions for further research.

  20. Compilation of Abstracts of Theses Submitted by Candidates for Degrees

    Science.gov (United States)

    1988-09-30

    Fragment of the thesis index (Management section, continued): Blake, W.R., CDR, USN, "Fiscal Constraints and the P-3 Flight Hour Budget," p. 283; Bodzin, M.B. (Martin Bradley Bodzin, Lieutenant, United States Naval Reserve), "A Literature Survey of Private Sector Methods of Determining Personal Financial Responsibility."

  1. Global compilation of coastline change at river mouths

    Science.gov (United States)

    Aadland, Tore; Helland-Hansen, William

    2016-04-01

    We are using Google Earth Engine to analyze Landsat images to create a global compilation of coastline change at river mouths, in order to develop scaling relationships between catchment properties and shoreline behaviour. Our main motivation for doing this is to better understand the rates at which shallowing-upward deltaic successions are formed. We are also interested in gaining insight into the impact of climate change and human activity on modern shorelines. Google Earth Engine is a platform that offers simple selection of relevant data from an extensive catalog of geospatial data and the tools to analyse it efficiently. We have used Google Earth Engine to select and analyze temporally and geographically bounded sets of Landsat images covering modern deltas included in the Milliman and Farnsworth 2010 database. The part of the shoreline sampled for each delta has been manually defined. The areas depicted in these image sets have been classified as land or water by thresholding a calibrated Modified Normalized Water Index. By representing land and water as 1.0 and 0.0 respectively and averaging image sets of sufficient size, we have generated rasters quantifying the probability of an area being classified as land. The calculated probabilities reflect variation in the shoreline position; in particular, averaging minimizes the impact of short-term variations produced by tides. The net change in the land area of deltas can be estimated by comparing how the probability changes between image sets spanning different time periods. We have estimated the land area change that occurred from 2000 to 2014 at more than 130 deltas with catchment areas ranging from 470 to 6,300,000 km². Log-log plots of the land area change of these deltas against their respective catchment properties in the Milliman and Farnsworth 2010 database indicate that the rate of land area change correlates with catchment size and discharge. Useful interpretation of the data requires that we
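
    The abstract does not spell the index out; assuming the index referred to is the Modified Normalized Difference Water Index (MNDWI) commonly used for Landsat water mapping, each pixel is classified from the green and shortwave-infrared reflectances as

        \mathrm{MNDWI} \;=\; \frac{\rho_{\mathrm{green}} - \rho_{\mathrm{SWIR}}}{\rho_{\mathrm{green}} + \rho_{\mathrm{SWIR}}}, \qquad \text{water if } \mathrm{MNDWI} > \tau

    where the threshold τ is the calibrated quantity mentioned in the abstract.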

  2. A compiled checklist of seaweeds of Sudanese Red Sea coast

    Institute of Scientific and Technical Information of China (English)

    Nahid Abdel Rahim Osman; Sayadat Eltigany Mohammed

    2016-01-01

    Objective: To present an updated and compiled checklist of Sudanese seaweeds, as an example for the region, for conservational as well as developmental purposes. Methods: The checklist was developed based on both field investigations, using the line transect method at 4 sites along the Red Sea coast of Sudan, and a review of the available studies on Sudanese seaweeds. Results: In total 114 macroalgal names were recorded and were found to be distributed in 16 orders, 34 families, and 62 genera. The Rhodophyceae macroalgae comprised 8 orders, 17 families, 32 genera and 47 species. The Phaeophyceae macroalgae comprised 4 orders, 5 families, 17 genera, and 28 species. The 39 species of Chlorophyceae macroalgae belong to 2 classes, 4 orders, 12 families, and 14 genera. The present paper proposes the addition of 11 macroalgal taxa to the Sudan seaweed species list. These include 3 red seaweed species, 1 brown seaweed species and 7 green seaweed species. Conclusions: This list is not yet comprehensive and only represents the macroalgal species common to the intertidal areas of the Sudan Red Sea coast. Further investigation may reveal the presence of more species. While significant levels of diversity and endemism have been revealed for other groups of organisms in the Red Sea region, similar work still has to be performed for seaweeds. Considering the impact of climate change on community structure and composition, and the growing risk posed by maritime transportation through the Red Sea, particularly by oil tankers, as well as by oil exploration, baseline data on seaweeds are highly needed for management purposes.

  3. Assessment of the current status of basic nuclear data compilations

    Energy Technology Data Exchange (ETDEWEB)

    Riemer, R.L.

    1992-12-31

    The Panel on Basic Nuclear Data Compilations believes that it is important to provide the user with an evaluated nuclear database of the highest quality, dependability, and currency. It is also important that the evaluated nuclear data are easily accessible to the user. In the past the panel concentrated its concern on the cycle time for the publication of A-chain evaluations. However, the panel now recognizes that publication cycle time is no longer the appropriate goal. Sometime in the future, publication of the evaluated A-chains will evolve from the present hard-copy Nuclear Data Sheets on library shelves to purely electronic publication, with the advent of universal access to terminals and the nuclear databases. Therefore, the literature cut-off date in the Evaluated Nuclear Structure Data File (ENSDF) is rapidly becoming the only important measure of the currency of an evaluated A-chain. Also, it has become exceedingly important to ensure that access to the databases is as user-friendly as possible and to enable electronic publication of the evaluated data files. Considerable progress has been made in these areas: use of the on-line systems has almost doubled in the past year, and there has been initial development of tools for electronic evaluation, publication, and dissemination. Currently, the nuclear data effort is in transition between the traditional and future methods of dissemination of the evaluated data. Also, many of the factors that adversely affect the publication cycle time simultaneously affect the currency of the evaluated nuclear database. Therefore, the panel continues to examine factors that can influence cycle time: the number of evaluators, the frequency with which an evaluation can be updated, the review of the evaluation, and the production of the evaluation, which currently exists as a hard-copy issue of Nuclear Data Sheets.

  4. Evaluation and compilation of fission product yields 1993

    Energy Technology Data Exchange (ETDEWEB)

    England, T.R.; Rider, B.F.

    1995-12-31

    This document is the latest in a series of compilations of fission yield data. Fission yield measurements reported in the open literature and calculated charge distributions have been used to produce a recommended set of yields for the fission products. The original data with reference sources, and the recommended yields, are presented in tabular form. These include many nuclides which fission by neutrons at several energies. These energies include thermal energies (T), fission spectrum energies (F), 14 MeV high energies (H or HE), and spontaneous fission (S), in six sets of ten each. Set A includes U235T, U235F, U235HE, U238F, U238HE, Pu239T, Pu239F, Pu241T, U233T, Th232F. Set B includes U233F, U233HE, U236F, Pu239H, Pu240F, Pu241F, Pu242F, Th232H, Np237F, Cf252S. Set C includes U234F, U237F, Pu240H, U234HE, U236HE, Pu238F, Am241F, Am243F, Np238F, Cm242F. Set D includes Th227T, Th229T, Pa231F, Am241T, Am241H, Am242MT, Cm245T, Cf249T, Cf251T, Es254T. Set E includes Cf250S, Cm244S, Cm248S, Es253S, Fm254S, Fm255T, Fm256S, Np237H, U232T, U238S. Set F includes Cm243T, Cm246S, Cm243F, Cm244F, Cm246F, Cm248F, Pu242H, Np237T, Pu240T, and Pu242T to complete fission product yield evaluations for 60 fissioning systems in all. This report also serves as the primary documentation for the second evaluation of yields in ENDF/B-VI released in 1993.

  5. Fifth Baltic Sea pollution load compilation (PLC-5)

    Energy Technology Data Exchange (ETDEWEB)

    Knuuttila, S.; Svendsen, L.M.; Staaf, H.; Kotilainen, P.; Boutrup, S.; Pyhala, M.; Durkin, M.

    2011-07-01

    This report includes the main results from the Fifth Pollution Load Compilation abbreviated PLC-5. It includes quantified annual waterborne total loads (from rivers, unmonitored and coastal areas as well as direct point and diffuse sources discharging directly to the Baltic Sea) from 1994 to 2008 to provide a basis for evaluating any decreasing (or increasing) trends in the total waterborne inputs to the Baltic Sea. Chapter 1 contains the objectives of PLC and the framework on classification of inputs and sources. Chapter 2 includes a short description of the Baltic Sea catchment area, while the methods for quantification and analysis together with quality assurance topics are briefly introduced in Chapter 3. More detailed information on methodologies is presented in the PLC-5 guidelines (HELCOM 2006). Chapter 4 reports the total inputs to the Baltic Sea of nutrients and selected heavy metals. Furthermore, the results of the quantification of discharges and losses of nitrogen and phosphorus from point and diffuse sources into inland surface waters within the Baltic Sea catchment area (source-oriented approach or gross loads) as well as the total load to the maritime area (load-oriented approach or net loads) in 2006 are shown. Typically, results are presented by country and by main Baltic Sea sub-region. In Chapter 5, flow normalization is introduced and the results of trend analyses on 1994-2008 time series data on total waterborne loads of nitrogen and phosphorus are given together with a first evaluation of progress in obtaining the provisional reduction targets by country and by main Baltic Sea sub-region. Chapter 6 includes discussion of some of the main conclusions and advice for future PLCs. The annexes contain the flow-normalized annual load data and figures and tables with results from the PLC-5.
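
    To make the flow-normalization and trend-analysis step in Chapter 5 concrete, the sketch below uses a simple linear load-versus-runoff regression and a crude linear trend. This is an illustration only: the variable names and data series are invented, and the authoritative procedure is the one defined in the PLC-5 guidelines (HELCOM 2006).

      # Minimal sketch of flow normalization of annual waterborne loads,
      # assuming a simple linear load-vs-runoff relation; the official PLC-5
      # procedure is defined in the HELCOM guidelines and may differ in detail.
      import numpy as np

      def flow_normalize(loads, flows):
          """Adjust annual loads to the long-term mean flow."""
          slope, intercept = np.polyfit(flows, loads, 1)        # load ~ a*flow + b
          predicted = slope * flows + intercept
          at_mean_flow = slope * flows.mean() + intercept
          return loads - (predicted - at_mean_flow)

      # Invented 1994-2008 series (15 years), purely for illustration.
      rng = np.random.default_rng(0)
      years = np.arange(1994, 2009)
      flows = 400 + 60 * rng.standard_normal(15)                # runoff, km3/yr
      loads = 0.9 * flows + 300 + 20 * rng.standard_normal(15)  # e.g. kt N/yr

      normalized = flow_normalize(loads, flows)
      trend = np.polyfit(years, normalized, 1)[0]               # crude linear slope
      print(f"flow-normalized trend: {trend:+.2f} units per year")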

  6. Compiling a Comprehensive EVA Training Dataset for NASA Astronauts

    Science.gov (United States)

    Laughlin, M. S.; Murray, J. D.; Lee, L. R.; Wear, M. L.; Van Baalen, M.

    2016-01-01

    Training for a spacewalk or extravehicular activity (EVA) is considered a hazardous duty for NASA astronauts. This places astronauts at risk for decompression sickness as well as various musculoskeletal disorders from working in the spacesuit. As a result, the operational and research communities over the years have requested access to EVA training data to supplement their studies. The purpose of this paper is to document the comprehensive EVA training data set that was compiled from multiple sources by the Lifetime Surveillance of Astronaut Health (LSAH) epidemiologists to investigate musculoskeletal injuries. The EVA training dataset does not contain any medical data; rather, it only documents when EVA training was performed, by whom, and other details about the session. The first activities practicing EVA maneuvers in water were performed at the Neutral Buoyancy Simulator (NBS) at the Marshall Space Flight Center in Huntsville, Alabama. This facility opened in 1967 and was used for EVA training until the early Space Shuttle program days. Although several photographs show astronauts performing EVA training in the NBS, records detailing who performed the training and the frequency of training are unavailable. Paper training records were stored within the NBS after it was designated as a National Historic Landmark in 1985 and closed in 1997, but significant resources would be needed to identify and secure these records, and at this time LSAH has not pursued acquisition of these early training records. Training in the NBS decreased when the Johnson Space Center in Houston, Texas, opened the Weightless Environment Training Facility (WETF) in 1980. Early training records from the WETF consist of 11 hand-written dive logbooks compiled by individual workers that were digitized at the request of LSAH. The WETF was integral in the training for Space Shuttle EVAs until its closure in 1998. The Neutral Buoyancy Laboratory (NBL) at the Sonny Carter Training Facility near JSC

  7. Initial study - compilation and synthesis of knowledge about energy crops from field to energy production

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Magnus; Bubholz, Monika; Forsberg, Maya; Myringer, Aase; Palm, Ola; Roennbaeck, Marie; Tullin, Claes

    2007-11-15

    Energy crops constitute an as yet not fully utilised potential as fuel for heating and power production. As competition for biomass increases, interest in agricultural fuels such as straw, energy grain, willow, reed canary grass and hemp is increasing. Exploiting the potential for energy crops as fuels will demand that cultivation and harvest be coordinated with transportation, storage and combustion of the crops. Together, Vaermeforsk and the Swedish Farmers' Foundation for Agricultural Research (SLF) have taken the initiative to a common research programme. The long-term aim of the programme is to increase production and utilisation of bioenergy from agriculture to combustion for heat and power production in Sweden. The vision is that during the course of the 2006 - 2009 programme, decisive steps will be taken towards a functioning market for biofuels for bioenergy from agriculture. This survey has compiled and synthesised available knowledge and experiences about energy crops from field to energy production. The aim has been to provide a snapshot of knowledge today, to identify knowledge gaps and to synthesise knowledge we have today into future research needs. A research plan proposal has been developed for the research programme.

  8. Electronic Services Monthly MI Report

    Data.gov (United States)

    Social Security Administration — This electronic services monthly MI report contains monthly MI data for most public-facing online applications such as iClaim, electronic access, Mobile wage...

  9. Your Baby's Growth: 3 Months

    Science.gov (United States)

    ... months of life are a period of rapid growth. Your baby will gain about 1 to 1½ ...

  10. Monthly energy review, August 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-08-01

    The Monthly Energy Review for August 1997 presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of U.S. production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors.

  11. Monthly progress report for April 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    Accomplishments for the month of April are described briefly for the following tasks: energy production research; fuels research; and supplemental government program. Energy production research includes: reservoir assessment and characterization; TORIS research support; development of improved microbial flooding methods; development of improved chemical flooding methods; development of improved alkaline flooding methods; mobility control and sweep improvement in chemical flooding; gas flood performance prediction improvement; mobility control, profile modification, and sweep improvement in gas flooding; three-phase relative permeability research; thermal processes for light oil recovery; thermal processes for heavy oil recovery; and imaging techniques applied to the study of fluids in porous media. Fuel research includes: development of analytical methodology for analysis of heavy crudes; and thermochemistry and thermophysical properties of organic nitrogen- and diheteratom-containing compounds. Supplemental government program includes: microbial-enhanced waterflooding field project; feasibility study of heavy oil recovery in the midcontinent region--Oklahoma, Kansas, and Missouri; surfactant-enhanced alkaline flooding field project; process-engineering property measurements on heavy petroleum components; development and application of petroleum production technologies; upgrade BPO crude oil data base; simulation analysis of steam-foam projects; DOE education initiative project; field application of foams for oil production symposium; technology transfer to independent producers; compilations and analysis of outcrop data from the Muddy and Almond Formations; implementation of oil and gas technology transfer initiative; and horizontal well production from fractured reservoirs.

  13. 8 CFR 1208.12 - Reliance on information compiled by other sources.

    Science.gov (United States)

    2010-01-01

    § 1208.12 Reliance on information compiled by other sources (Title 8, Aliens and Nationality; Executive Office for Immigration Review; ... Withholding of Removal): (a) In deciding an asylum ...

  14. 8 CFR 240.69 - Reliance on information compiled by other sources.

    Science.gov (United States)

    2010-01-01

    § 240.69 Reliance on information compiled by other sources (Title 8, Aliens and Nationality; Department of Homeland Security, Immigration ...): In determining whether an applicant is eligible for ...

  15. 8 CFR 1240.69 - Reliance on information compiled by other sources.

    Science.gov (United States)

    2010-01-01

    § 1240.69 Reliance on information compiled by other sources (Title 8, Aliens and Nationality; Executive Office for Immigration Review; ... section 203 of Pub. L. 105-100): In determining ...

  16. 8 CFR 208.12 - Reliance on information compiled by other sources.

    Science.gov (United States)

    2010-01-01

    § 208.12 Reliance on information compiled by other sources (Title 8, Aliens and Nationality; Department of Homeland Security, Immigration ...): (a) In deciding an asylum application, or in deciding ...

  17. Methods for the Compilation of a Core List of Journals in Toxicology.

    Science.gov (United States)

    Kuch, T. D. C.

    Previously reported methods for the compilation of core lists of journals in multidisciplinary areas are first examined, with toxicology used as an example of such an area. Three approaches to the compilation of a core list of journals in toxicology were undertaken and the results analyzed with the aid of models. Analysis of the results of the…

  18. Regulatory and technical reports: (Abstract index journal). Compilation for first quarter 1997, January--March

    Energy Technology Data Exchange (ETDEWEB)

    Sheehan, M.A.

    1997-06-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors. This compilation is published quarterly and cumulated annually. Reports consist of staff-originated reports, NRC-sponsored conference reports, NRC contractor-prepared reports, and international agreement reports.

  19. Java Compiler Technology and Java Performance

    Institute of Scientific and Technical Information of China (English)

    冀振燕; 程虎

    2000-01-01

    This paper summarizes Java's compiler technology, and sorts all kinds of Java compilers into five categories: compilers with interpreter technology, compilers with JIT compiler technology, compilers with adaptive optimization technology, native compilers, and translators. Their architectures and working principles are described and analyzed in detail. The authors also analyze the effect that compiler technology has on Java performance.

  20. Design Choices in a Compiler Course or How to Make Undergraduates Love Formal Notation

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff

    2008-01-01

    The undergraduate compiler course offers a unique opportunity to combine many aspects of the Computer Science curriculum. We discuss the many design choices that are available for the instructor and present the current compiler course at the University of Aarhus, the design of which displays...

  1. Regulatory and technical reports (abstract index journal). Compilation for third quarter 1997, July--September

    Energy Technology Data Exchange (ETDEWEB)

    Stevenson, L.L.

    1998-01-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the US Nuclear Regulatory Commission (NRC) Staff and its contractors. It is NRC's intention to publish this compilation quarterly and to cumulate it annually. This report contains the third quarter 1997 abstracts.

  2. A compilation of global bio-optical in situ data for ocean-colour satellite applications

    NARCIS (Netherlands)

    Valente, A.; Wernand, M.R.

    2016-01-01

    A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY

  3. 36 CFR 902.57 - Investigatory files compiled for law enforcement purposes.

    Science.gov (United States)

    2010-07-01

    § 902.57 Investigatory files compiled for law enforcement purposes (Title 36, Parks, Forests, and Public Property; Pennsylvania ... Records): (a) Files compiled by the Corporation for law enforcement purposes, including the enforcement of the regulations of the Corporation, are...

  4. And the Survey Says...

    Science.gov (United States)

    White, Susan C.

    2016-01-01

    Last month we highlighted our Quadrennial Survey of High School Physics Teachers. Using data from the survey, we have looked at the availability of high school physics. We report that about 95% of high school seniors attend a high school where physics is offered regularly--either every year or every other year. A U.S. Department of Education…

  5. Compilation and network analyses of cambrian food webs.

    Science.gov (United States)

    Dunne, Jennifer A; Williams, Richard J; Martinez, Neo D; Wood, Rachel A; Erwin, Douglas H

    2008-04-29

    A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid diversification of species, body
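
    The "niche model" invoked above has a compact standard construction; the sketch below implements that generic recipe with arbitrary values of species richness S and connectance C, and is not the authors' code or the Cambrian data.

      # Sketch of the standard "niche model" construction; S (species) and
      # C (connectance) are arbitrary illustrative values, not Cambrian data.
      import numpy as np

      def niche_model(S, C, seed=0):
          rng = np.random.default_rng(seed)
          n = np.sort(rng.uniform(0.0, 1.0, S))       # niche values
          beta = 1.0 / (2.0 * C) - 1.0                # so that E[x] = 2C
          r = n * rng.beta(1.0, beta, S)              # feeding ranges
          c = rng.uniform(r / 2.0, n)                 # feeding centres
          lo, hi = c - r / 2.0, c + r / 2.0
          # adjacency[i, j] is True when species i eats species j
          return (n >= lo[:, None]) & (n <= hi[:, None])

      S = 50
      web = niche_model(S, C=0.1)
      prey_per_species = web.sum(axis=1)              # out-degree
      predators_per_species = web.sum(axis=0)         # in-degree
      print("links:", int(web.sum()), "links per species:", web.sum() / S)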

  6. Compilation and network analyses of cambrian food webs.

    Directory of Open Access Journals (Sweden)

    Jennifer A Dunne

    2008-04-01

    Full Text Available A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid

  7. A Compilation of MATLAB Scripts and Functions for MACGMC Analyses

    Science.gov (United States)

    Murthy, Pappu L. N.; Bednarcyk, Brett A.; Mital, Subodh K.

    2017-01-01

    The primary aim of the current effort is to provide scripts that automate many of the repetitive pre- and post-processing tasks associated with composite materials analyses using the Micromechanics Analysis Code with the Generalized Method of Cells (MACGMC). This document consists of a compilation of hundreds of scripts that were developed in the MATLAB (The Mathworks, Inc., Natick, MA) programming language and consolidated into 16 MATLAB functions. MACGMC is a composite material and laminate analysis software code developed at NASA Glenn Research Center. The software package has been built around the generalized method of cells (GMC) family of micromechanics theories. The computer code is developed with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated to increase the user friendliness, as well as to make it more robust in terms of input preparation and code execution. Finally, classical lamination theory has been implemented within the software, wherein GMC is used to model the composite material response of each ply. Thus, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. The pre-processing tasks include generation of a multitude of different repeating unit cells (RUCs) for CMCs and PMCs, visualization of RUCs from MACGMC input and output files, and generation of the RUC section of a MACGMC input file. The post-processing tasks include visualization of the predicted composite response, such as local stress and strain contours, damage initiation and progression, stress-strain behavior, and fatigue response. In addition to the above, several miscellaneous scripts have been developed that can be used to perform repeated Monte-Carlo simulations to enable probabilistic
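
    The pre/post-processing automation described above follows a common pattern: template an input deck, run the analysis code, and collect its output for plotting. The sketch below shows only that pattern; the executable name "macgmc", the keyword-style input deck, and the output file name are hypothetical placeholders, not the real MACGMC interface.

      # Generic sketch of batch pre/post-processing around an external analysis
      # code. The "macgmc" executable, the *KEYWORD input deck and the output
      # file name are hypothetical placeholders, not the real MACGMC interface.
      import pathlib
      import subprocess

      def write_deck(path, case, vf):
          """Write a (hypothetical) keyword-style input deck."""
          lines = ["*TITLE", f"  case_{case}",
                   "*FIBER_VOLUME_FRACTION", f"  {vf:.3f}", ""]
          path.write_text("\n".join(lines))

      def run_case(case, vf, workdir="runs"):
          out = pathlib.Path(workdir) / f"case_{case}"
          out.mkdir(parents=True, exist_ok=True)
          deck = out / "input.mac"
          write_deck(deck, case, vf)
          # Hypothetical command line; substitute the real solver invocation.
          subprocess.run(["macgmc", deck.name], cwd=out, check=True)
          return out / "output.dat"                   # collected for post-processing

      # Sweep fiber volume fraction and gather the outputs for later plotting.
      outputs = [run_case(i, vf) for i, vf in enumerate([0.50, 0.55, 0.60])]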

  8. And the Survey Says ...

    Science.gov (United States)

    White, Susan C.

    2013-01-01

    Two-Year Colleges, Physics Majors, and Diversity. As noted last month, we're taking a look at physics in two-year colleges (TYCs). We expect to have the first reports from our 2012-13 Nationwide Survey of High School Physics Teachers in the spring of 2014. Last month we noted that the high school physics experience of undergraduate physics…

  9. Compiler-assisted multiple instruction rollback recovery using a read buffer

    Science.gov (United States)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.
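
    As a rough, toy-level illustration of the rollback data hazards discussed above (not the paper's actual scheme), the sketch below scans a straight-line code fragment for registers that are overwritten within a rollback distance N of an earlier definition or use; these are the old values that either a read buffer or a compiler renaming transformation would have to preserve. The instruction encoding and the example fragment are invented.

      # Toy illustration only: find registers whose old values would be lost on
      # a rollback of up to N instructions because they are overwritten shortly
      # after being defined or read; these values must be preserved by a read
      # buffer (hardware) or removed by renaming (compiler transformation).
      N = 4  # rollback distance, in instructions

      # (destination, sources) pairs for an invented straight-line fragment
      code = [
          ("r1", ["r2", "r3"]),   # 0: r1 = r2 + r3
          ("r4", ["r1"]),         # 1: r4 = load r1
          ("r1", ["r4", "r5"]),   # 2: r1 = r4 * r5   <- overwrites recent r1
          ("r6", ["r1"]),         # 3: r6 = r1 - 1
      ]

      def rollback_hazards(code, n):
          """Return (index, register) pairs where an overwrite destroys a value
          that re-execution of the previous n instructions might still need."""
          hazards = []
          for j, (dest, _) in enumerate(code):
              window = code[max(0, j - n):j]
              if any(dest == d or dest in srcs for d, srcs in window):
                  hazards.append((j, dest))
          return hazards

      print(rollback_hazards(code, N))   # -> [(2, 'r1')]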

  10. The paradigm compiler: Mapping a functional language for the connection machine

    Science.gov (United States)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
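
    What "data parallel" means for the array computations described above can be pictured with a small whole-array example; NumPy merely stands in for the Connection Machine target, and the relaxation stencil is an arbitrary illustration rather than anything taken from the Paradigm Compiler.

      # Tiny data-parallel example: the same update is applied to every interior
      # element, so each element could be mapped to its own processing element.
      # NumPy whole-array operations stand in for the Connection Machine here.
      import numpy as np

      nx, ny = 64, 64                    # array dimensions known at "compile time"
      u = np.zeros((nx, ny))
      u[0, :] = 1.0                      # an arbitrary boundary condition

      # One Jacobi-style relaxation sweep written as whole-array operations:
      # every interior element is computed independently of the others.
      u_new = u.copy()
      u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])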

  11. Obtaining correct compile results by absorbing mismatches between data types representations

    Energy Technology Data Exchange (ETDEWEB)

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2016-10-04

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
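
    A minimal sketch of the conversion step described above: a first-language AST is walked, data types are mapped through a conversion table, and a special error node holding the offending token is produced when no mapping exists, so that unparsing can emit the original first-language source. The node classes and the table contents are hypothetical stand-ins.

      # Sketch of converting a first-language AST into a second-language AST,
      # absorbing data-type mismatches with an error node that stores the
      # offending token for later unparsing. All names here are hypothetical.
      from dataclasses import dataclass, field

      TYPE_TABLE = {"int32": "Integer", "float64": "Double"}   # lang A -> lang B

      @dataclass
      class Node:
          kind: str                      # e.g. "var_decl"
          dtype: str = ""
          children: list = field(default_factory=list)
          token: str = ""                # original first-language source text

      @dataclass
      class ErrorNode:
          token: str                     # emitted verbatim when unparsing

      def convert(node):
          """Map a lang-A node to lang B, or wrap it in an error node."""
          if node.dtype and node.dtype not in TYPE_TABLE:
              return ErrorNode(token=node.token)       # keep source for unparse
          mapped = TYPE_TABLE.get(node.dtype, "")
          return Node(node.kind, mapped,
                      [convert(c) for c in node.children], node.token)

      tree = Node("var_decl", "complex128", token="complex128 z;")  # unmapped type
      print(convert(tree))               # -> ErrorNode(token='complex128 z;')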

  12. Obtaining correct compile results by absorbing mismatches between data types representations

    Energy Technology Data Exchange (ETDEWEB)

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-03-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  13. PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines

    Directory of Open Access Journals (Sweden)

    Zeki Bozkus

    1997-01-01

    Full Text Available High Performance Fortran (HPF is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing HPF code on parallel architectures.
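
    The data-distribution arithmetic such a compiler generates can be illustrated for a simple BLOCK distribution: which processor owns a global index, and what local index it receives. The formula below is the textbook block distribution, not necessarily PGHPF's exact implementation.

      # Standard BLOCK distribution arithmetic an HPF-style compiler generates:
      # which processor owns global index i of an n-element array over p
      # processors, and the corresponding local index on that processor.
      import math

      def to_local(i, n, p):
          """Return (owner, local index) for global index i under BLOCK."""
          block = math.ceil(n / p)       # elements per processor
          return i // block, i % block

      n, p = 10, 4                       # 10 elements over 4 processors -> blocks of 3
      for i in range(n):
          print(i, to_local(i, n, p))    # indices 0-2 on proc 0, 3-5 on proc 1, ...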

  14. The Construction of a Database to Support the Compilation of Japanese Learners’ Dictionaries

    Directory of Open Access Journals (Sweden)

    Yuriko SUNAKAWA

    2012-10-01

    Full Text Available The number of Japanese language learners outside Japan, especially of advanced-level learners, is increasing yearly. From the intermediate level onwards, they could profit from bilingual Japanese learners’ dictionaries in their native language, but in most linguistic areas of the world only very simple dictionaries for beginners and for tourists are available. Our project therefore aims at supporting the compilation of Japanese language learners’ dictionaries for intermediate and advanced learners by building a database of contents needed when editing a Japanese language learners’ dictionary, and offering it online. This four-year project runs from 2011 to 2014. Two surveys were conducted: a survey of the vocabulary used in textbooks of Japanese as a foreign language and a quantitative survey of the targeted area of the Japanese language in a large-scale corpus, in order to select the list of words to be included in the database; from these, a general list of basic vocabulary for Japanese language instruction was created. At present, usage examples are being compiled on the basis of this vocabulary list, and a database system is being developed. A prototype of a database search interface and download system has been completed. The database is going to include various types of information considered to be useful for learners, such as grammar, phonetics, synonyms, collocations, stylistics, and learners’ errors. These are presently being studied in detail to be made public in 2014.

  15. The Construction of a Database to Support the Compilation of Japanese Learners’ Dictionaries

    Directory of Open Access Journals (Sweden)

    LEE, Jae-ho

    2012-10-01

    Full Text Available The number of Japanese language learners outside Japan, especially of advanced-level learners, is increasing yearly. From the intermediate level onwards, they could profit from bilingual Japanese learners’ dictionaries in their native language, but in most linguistic areas of the world only very simple dictionaries for beginners and for tourists are available. Our project therefore aims at supporting the compilation of Japanese language learners’ dictionaries for intermediate and advanced learners by building a database of contents needed when editing a Japanese language learners’ dictionary, and offering it online. This four-year project runs from 2011 to 2014. Two surveys were conducted: a survey of the vocabulary used in textbooks of Japanese as a foreign language and a quantitative survey of the targeted area of the Japanese language in a large-scale corpus, in order to select the list of words to be included in the database; from these, a general list of basic vocabulary for Japanese language instruction was created. At present, usage examples are being compiled on the basis of this vocabulary list, and a database system is being developed. A prototype of a database search interface and download system has been completed. The database is going to include various types of information considered to be useful for learners, such as grammar, phonetics, synonyms, collocations, stylistics, and learners’ errors. These are presently being studied in detail to be made public in 2014.

  16. Compilation of Abstracts of Theses Submitted By Candidates for Degrees

    Science.gov (United States)

    1990-09-30

    Compilation of abstracts of theses submitted by degree candidates at the Naval Postgraduate School. Recoverable contents-page entries include: Leslie R. Elkin (Lieutenant, U.S. Navy), "Corrosion Mechanisms and Behavior of a P-130X Gr/6063 Al Composite in Aqueous Environments" (p. 180); Kent A. ... (Lieutenant, U.S. Navy), on a P-130X graphite fiber reinforced 6063 aluminum metal matrix composite; and Howard E. Koth, "The Effects of 1, 2, 3, and 4 Hz Imposed ..." (p. 186).

  17. Setting up an Android system build server and configuring the build environment

    Institute of Scientific and Technical Information of China (English)

    刘志锋

    2016-01-01

    This paper describes how to set up an Android system build server and configure its build environment to meet the needs of large-scale system compilation. It covers setting up the Ubuntu server, setting up the SVN server, configuring the chip vendors' cross-compilation libraries, and sharing a workspace between the Ubuntu and Windows systems.

  18. Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tick, E. [comp.

    1991-11-01

    This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31--November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purpose of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working in compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led to different communities emphasizing different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.

  19. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    Energy Technology Data Exchange (ETDEWEB)

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran; using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.

  20. Natural Gas Monthly, October 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-11-10

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. This month's feature articles are: US Production of Natural Gas from Tight Reservoirs, and The Expanding Role of Underground Storage.

  1. Natural gas monthly, May 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-05-25

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. The featured articles for this month are: Opportunities with fuel cells, and revisions to monthly natural gas data.

  2. Monthly energy review, January 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-01-01

    This report presents an overview of recent monthly energy statistics. Major activities covered include production, consumption, trade, stocks, and prices for fossil fuels, electricity, and nuclear energy.

  3. Natural gas monthly, July 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-07-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. The feature article this month is entitled "Intricate puzzle of oil and gas reserves growth." A special report is included on revisions to monthly natural gas data. 6 figs., 24 tabs.

  4. Monthly Program Cost Report (MPCR)

    Data.gov (United States)

    Department of Veterans Affairs — The Monthly Program Cost Report (MPCR) replaces the Cost Distribution Report (CDR). The MPCR provides summary information about Veterans Affairs operational costs,...

  5. Mapping monthly rainfall erosivity in Europe.

    Science.gov (United States)

    Ballabio, Cristiano; Borrelli, Pasquale; Spinoni, Jonathan; Meusburger, Katrin; Michaelides, Silas; Beguería, Santiago; Klik, Andreas; Petan, Sašo; Janeček, Miloslav; Olsen, Preben; Aalto, Juha; Lakatos, Mónika; Rymszewicz, Anna; Dumitrescu, Alexandru; Tadić, Melita Perčec; Diodato, Nazzareno; Kostalova, Julia; Rousseva, Svetla; Banasik, Kazimierz; Alewell, Christine; Panagos, Panos

    2017-02-01

    Rainfall erosivity as a dynamic factor of soil loss by water erosion is modelled intra-annually for the first time at European scale. The development of the Rainfall Erosivity Database at European Scale (REDES) and its 2015 update with the extension to a monthly component made it possible to develop monthly and seasonal R-factor maps and to assess rainfall erosivity both spatially and temporally. During winter months, significant rainfall erosivity is present only in part of the Mediterranean countries. A sudden increase of erosivity occurs in a major part of the European Union (except the Mediterranean basin, the western part of Britain, and Ireland) in May, and the highest values are registered during summer months. Starting from September, the R-factor has a decreasing trend. The mean rainfall erosivity in summer is almost 4 times higher (315 MJ mm ha⁻¹ h⁻¹) compared to winter (87 MJ mm ha⁻¹ h⁻¹). The Cubist model has been selected among various statistical models to perform the spatial interpolation due to its excellent performance, ability to model non-linearity, and interpretability. The monthly prediction is an order of magnitude more difficult than the annual one, as it is limited by the number of covariates and, for consistency, the sum of all months has to be close to the annual erosivity. The performance of the Cubist models proved to be generally high, resulting in R² values between 0.40 and 0.64 in cross-validation. The monthly maps show an increasing trend of erosivity from winter to summer, progressing from western to eastern Europe. The maps also show a clear delineation of areas with different erosivity seasonal patterns, whose spatial outline was evidenced by cluster analysis. The monthly erosivity maps can be used to develop composite indicators that map both intra-annual variability and concentration of erosive events. Consequently, spatio-temporal mapping of rainfall erosivity makes it possible to identify the months and the areas with the highest risk of soil loss where conservation measures should be
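
    The cross-validation behind the reported R² range can be sketched as below; a scikit-learn gradient-boosted tree ensemble is used as a stand-in for the Cubist model (which is not part of scikit-learn), and the covariates are random placeholders rather than the actual REDES and climate data.

      # Sketch of the cross-validation step; a gradient-boosted tree ensemble
      # stands in for the Cubist model, and the covariates are random
      # placeholders instead of the REDES / climate data.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      n_stations = 500
      X = rng.normal(size=(n_stations, 4))            # placeholder covariates
      y = 50 + 30 * X[:, 0] + 10 * X[:, 1] ** 2 \
          + rng.normal(scale=15, size=n_stations)     # synthetic monthly R-factor

      model = GradientBoostingRegressor(random_state=0)
      scores = cross_val_score(model, X, y, cv=10, scoring="r2")
      print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")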

  6. Research on the Geo-spatial data compiling of the Circum-Pacific area

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jeong-Chan; Chun, Hee-Young; Kim, You-Bong [Korea Institute of Geology Mining and Materials, Taejon (KR)] (and others)

    1999-12-01

    This project focuses on the compilation and digitization of geo-spatial data such as lithostratigraphic lexicon and biostratigraphic data of major sedimentary basins and absolute geologic age data of major rock units including sedimentary, metamorphic and igneous rocks. During 1997, the first year of this project, the Pohang Basin (Tertiary) was chosen as the target for compilation and digitization of lithostratigraphic lexicon and biostratigraphic data. A total of 32 lithostratigraphic lexicons and 36 biostratigraphic data were compiled. During 1998, the second year of this project, we compiled lithostratigraphic lexicon and biostratigraphic data of Mesozoic strata in two sedimentary basins: the Chungnam Coal field and Gyeongsang Basin. A total of 9 lithostratigraphic lexicons were compiled from the Chungnam Coal field, and 42 lithostratigraphic lexicons were compiled from the Gyeongsang Basin. Each lexicon includes name, rank, type locality or section, index, historical records, geographic distribution and references. Due to the lack of fossils in both Mesozoic non-marine basins, the biostratigraphic data could not be compiled by biozone. Rather, biostratigraphic data were collected in reference to each lithostratigraphic unit (i.e., formation). In this year, the third year of this project, the Joseon Supergroup (Cambrian-Ordovician) was chosen as the target for compilation and digitization of lithostratigraphic lexicon and biostratigraphic data. A total of 51 lithostratigraphic lexicons and 33 biostratigraphic data were compiled. Data acquired from this project were exchanged with compatible data from CCOP member countries (East Asia), resulting in the production of the WGGC CD-ROM (GEOLOGICAL CORRELATION: Lexicon, Biostratigraphy, Geochronology Data Bases Version 5.2). (author). 18 refs.

  7. A compilation of information on the ³¹P(p,α)²⁸Si reaction and properties of excited levels in the compound nucleus ³²S

    Energy Technology Data Exchange (ETDEWEB)

    Miller, R.E.; Smith, D.L. [Argonne National Lab., IL (United States). Technology Development Div.]

    1997-11-01

    This report documents a survey of the literature, and provides a compilation of data contained therein, for the ³¹P(p,α)²⁸Si reaction. Attention is paid here to resonance states in the compound-nuclear system ³²S formed by ³¹P + p, with emphasis on the alpha-particle decay channels, ²⁸Si + α, which populate specific levels in ²⁸Si. The energy region near the proton separation energy for ³²S is especially important in this context for applications in nuclear astrophysics. Properties of the excited states in ²⁸Si are also considered. Summaries of all the located references are provided and numerical data contained in them are compiled in EXFOR format where applicable.

  8. Ada compiler evaluation on the Space Station Freedom Software Support Environment project

    Science.gov (United States)

    Badal, D. L.

    1989-01-01

    This paper describes the work in progress to select the Ada compilers for the Space Station Freedom Program (SSFP) Software Support Environment (SSE) project. The purpose of the SSE Ada compiler evaluation team is to establish the criteria, test suites, and benchmarks to be used for evaluating Ada compilers for the mainframes, workstations, and the realtime target for flight- and ground-based computers. The combined efforts and cooperation of the customer, subcontractors, vendors, academia and SIGAda groups made it possible to acquire the necessary background information, benchmarks, test suites, and criteria used.

  9. Compilation of Earthquakes from 1850-2007 within 200 miles of the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    N. Seth Carpenter

    2010-07-01

    An updated earthquake compilation was created for the years 1850 through 2007 within 200 miles of the Idaho National Laboratory. To generate this compilation, earthquake catalogs were collected from several contributing sources and searched for redundant events using the search criteria established for this effort. For all sets of duplicate events, a preferred event was selected, largely based on epicenter-network proximity. All unique magnitude information for each event was added to the preferred event records and these records were used to create the compilation referred to as “INL1850-2007”.
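
    The redundant-event search described above can be sketched as a proximity test in origin time and epicentral distance; the thresholds and the two sample records below are illustrative placeholders, not the actual INL1850-2007 criteria or catalog entries.

      # Sketch of the duplicate-event search: events from different catalogs
      # that are close in origin time and epicenter are treated as the same
      # earthquake. Thresholds, the preference rule and the two sample records
      # are illustrative placeholders, not the actual INL1850-2007 criteria.
      import math
      from datetime import datetime, timedelta

      def distance_km(lat1, lon1, lat2, lon2):
          """Great-circle (haversine) distance in kilometres."""
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * 6371.0 * math.asin(math.sqrt(a))

      def is_duplicate(e1, e2, dt=timedelta(seconds=30), dkm=30.0):
          return (abs(e1["time"] - e2["time"]) <= dt and
                  distance_km(e1["lat"], e1["lon"], e2["lat"], e2["lon"]) <= dkm)

      catalog_a = {"id": "A1", "time": datetime(1983, 10, 28, 14, 6, 6),
                   "lat": 44.06, "lon": -113.86, "mag": 6.9}
      catalog_b = {"id": "B7", "time": datetime(1983, 10, 28, 14, 6, 9),
                   "lat": 44.08, "lon": -113.90, "mag": 7.0}
      # True -> keep one preferred record and attach both magnitude estimates.
      print(is_duplicate(catalog_a, catalog_b))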

  10. Regulatory and technical reports (abstract index journal): Annual compilation for 1994. Volume 19, Number 4

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-03-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the US Nuclear Regulatory Commission (NRC) Staff and its contractors. It is NRC's intention to publish this compilation quarterly and to cumulate it annually. The main citations and abstracts in this compilation are listed in NUREG number order. These precede the following indexes: secondary report number index, personal author index, subject index, NRC originating organization index (staff reports), NRC originating organization index (international agreements), NRC contract sponsor index (contractor reports), contractor index, international organization index, and licensed facility index. A detailed explanation of the entries precedes each index.

  11. Monthly energy review, November 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 61 tabs.

  12. Natural gas monthly, February 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-02-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. 6 figs., 28 tabs.

  13. Monthly energy review: April 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-04-01

    This monthly report presents an overview of energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. A section is also included on international energy. The feature paper which is included each month is entitled "Energy equipment choices: Fuel costs and other determinants." 37 figs., 59 tabs.

  14. ULTRAPLATE 30 month management report

    DEFF Research Database (Denmark)

    Jensen, Jens Dahl

    2003-01-01

    In the period from month 24 to month 30, focus has been on the work-package 3 activities concerning optimisation of the newly developed ULTRAPLATE technology towards specific industrial applications. Three main application areas have been pursued: 1) High-speed plating of lead-free solder contact...

  15. Your Baby's Growth: 5 Months

    Science.gov (United States)

    ... your child's birth, the doctor has been recording growth in weight, length, and head size (circumference) during ...

  16. Monthly energy review, November 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 91 tabs.

  17. Natural gas monthly, November 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. 6 figs., 27 tabs.

  18. Natural gas monthly, January 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-02-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. 6 figs., 28 tabs.

  19. Monthly energy review, October 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-10-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 61 tabs.

  20. Monthly energy review, June 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 36 figs., 61 tabs.

  1. Monthly energy review, May 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-05-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 61 tabs.

  2. Monthly energy review, January 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-01-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 61 tabs.

  3. Monthly energy review, February 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-02-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 73 tabs.

  4. Monthly energy review, March 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 74 tabs.

  5. Natural gas monthly, December 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. 6 figs., 28 tabs.

  6. Left behind by Birth Month

    Science.gov (United States)

    Solli, Ingeborg Foldøy

    2017-01-01

    Utilizing comprehensive administrative data from Norway, I investigate long-term birth month effects. I demonstrate that the oldest children in class have a substantially higher GPA than their younger peers. The birth month differences are larger for low-SES children. Furthermore, I find that the youngest children in class are lagging significantly…

  7. Monthly Energy Review, February 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-26

    This monthly publication presents an overview of EIA's recent monthly energy statistics, covering the major activities of U.S. production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. Two brief descriptions ("energy plugs") of two EIA publications are presented at the start.

  8. Haida Months of the Year.

    Science.gov (United States)

    Cogo, Robert

    Students are introduced to Haida vocabulary in this booklet which briefly describes the seasons and traditional seasonal activities of Southeastern Alaska Natives. The first section lists the months in English and Haida; e.g., January is "Taan Kungaay," or "Bear Hunting Month." The second section contains seasonal names in…

  9. Monthly energy review, November 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 75 tabs.

  10. Monthly energy review, July 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-07-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. 37 figs., 73 tabs.

  11. Savannah River Laboratory monthly report

    Energy Technology Data Exchange (ETDEWEB)

    1985-12-01

    Efforts in the area of nuclear reactors and scientific computations are reported, including: robotics; reactor irradiation of nonend-bonded target slugs; computer link with Los Alamos National Laboratory; L-reactor thermal mitigation; aging of carbon in SRP reactor airborne activity confinement systems; and reactor risk assessment for earthquakes. Activities in chemical processes and environmental technology are reported, including: solids formation in a plutonium product stream; revised safety analysis reporting for F and H-Canyon operations; organic carbon analysis of DWPF samples; applications of Fourier transform infrared spectrometry; water chemistry analyzer for SRP reactors; and study of a biological community in Par Pond. Defense waste and laboratory operations activities include: Pu-238 waste incinerator startup; experimental canister frit blaster; saltstone disposal area design; powder metallurgy core diameter measurement; and a new maintenance shop facility. Nuclear materials planning encompasses decontamination and decommissioning of SRP facilities and a comprehensive compilation of environmental and nuclear safety issues. (LEW)

  12. SCOR WG-131: The Legacy of in situ Iron Enrichment Experiments - Data Compilation and Modeling

    Science.gov (United States)

    Boyd, Philip W.; Bakker, Dorothee C. E.; Wg-131, Scor

    2010-05-01

    Working Group 131 (WG-131) of the Scientific Committee on Oceanic Research (SCOR) has these aims: 1) Data compilation. Assembling in a common open-access database the metadata and data of the in situ iron fertilization experiments, ranging from surface water and water column data on physical, chemical and biological parameters, to biogeochemical rate processes and incubation experiments; and 2) Modeling and data synthesis of specific aspects of two or more such experiments for various topics, such as physical mixing, phytoplankton productivity, overall ecosystem functioning, iron chemistry, carbon budgeting, nutrient uptake ratios, and combinations of these variables and processes. Over the last 24 months SCOR WG-131 participants, in particular Doug Mackie, have liaised with Cyndy Chandler and Steve Gegg at the Biological and Chemical Oceanography Data Management Office (BCO-DMO; http://www.bco-dmo.org/home) to pull together a relational database that spans data from IronEx I in 1993 to SEEDS II in 2004. The BCO-DMO database currently contains data sets for IronEx I, IronEx II, SOIREE, SEEDS I, SEEDS II, SERIES, SOFeX-North and SOFeX-South. The BCO-DMO database also has a link to the publicly available EisenEx data, which are stored at the World Data Center for Marine Environmental Sciences (WDC-MARE, http://www.wdc-mare.org/). We hope to launch the BCO-DMO database at a side meeting on the afternoon of Sunday 21 February 2010 before the Ocean Sciences Meeting in Portland. The relational BCO-DMO database permits intercomparisons of data, thus allowing for exciting and novel opportunities for data synthesis and modeling, from simple 1-dimensional biological models through to complex 3-dimensional models. Scientists interested in such work are invited to contact the chairs of WG-131.

  13. North Alaska petroleum analysis: the regional map compilation

    Science.gov (United States)

    Saltus, Richard W.; Bird, Kenneth J.

    2003-01-01

    The U.S. Geological Survey initiated an effort to model north Alaskan petroleum systems. The geographic and geologic basis for modeling systems is provided by a set of regional digital maps that allow evaluation of the widest possible extent of each system. Accordingly, we laid out a rectangular map grid 1300 km (800 miles) east-west and 600 km (375 miles) north-south. The resulting map area extends from the Yukon Territory of Canada on the east to the Russian-U.S. Chukchi Sea on the west and from the Brooks Range on the south to the Canada basin-Chukchi borderland on the north. Within this map region, we combined disparate types of publicly available data to produce structure contour maps. Data types range from seismic-based mapping as in the National Petroleum Reserve to well penetrations in areas of little or no seismic data where extrapolation was required. With these types of data, we produced structure contour maps on three horizons: top of pre-Mississippian (basement), top of Triassic (Ellesmerian sequence), and top of Neocomian (Beaufortian sequence). These horizons, when combined with present-day topography and bathymetry, provide the bounding structural/stratigraphic surfaces of the north Alaskan petroleum province that mark major defining moments of the region's geologic history and allow regional portrayal of preserved sediment accumulations.

  14. A Compilation of Researches on Media and Violence

    Directory of Open Access Journals (Sweden)

    Gülsüm ÇALIŞIR

    2015-12-01

    Full Text Available Violence, although an undesirable pattern of behaviour, has found a place in the routine of daily life and unfortunately remains a reality in almost all communities, developed or undeveloped. This presents a contradiction: in an age in which civilization has advanced rapidly, violence still stands before us as a mark of incivility, and its many forms surround us. This study surveys the research conducted in Turkey, primarily on the subject of media and violence, from the second half of the 2000s to the present. Particular weight is given to studies of how the tendency toward violence against women is reflected in the media, and to studies that take the media as their subject. To that end, the study reviews work on violence in newspapers, radio, and television, which are people's main sources of information, as well as in social media, a popular communication channel of recent years, and briefly introduces the studies identified. The study was therefore carried out as a literature review.

  15. The application of compiler-assisted multiple instruction retry to VLIW architectures

    Science.gov (United States)

    Chen, Shyh-Kwei; Fuchs, W. K.; Hwu, Wen-Mei W.

    1994-01-01

    Very Long Instruction Word (VLIW) architectures enhance performance by exploiting fine-grained instruction level parallelism. We describe the development of two compiler-assisted multiple instruction word retry schemes for VLIW architectures. The first scheme utilizes the compiler techniques previously developed for processors with single functional units. Compiler-generated hazard-free code with different degrees of rollback capability for uniprocessors is compacted by a modified VLIW trace scheduling algorithm. Nops are then inserted in the scheduled code words to resolve data hazards for VLIW architectures. Performance is compared under three parameters: the rollback distance for uniprocessors; the number of functional units; and the rollback distance for VLIW architectures. The second scheme employs a hardware read buffer to resolve frequently occurring data hazards, and utilizes the compiler to resolve the remaining hazards. Performance results are shown for six benchmark programs.
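
    The data hazard referred to above arises when a register is overwritten within the rollback distance of an earlier instruction that read it; rolling back and re-executing that earlier instruction would then see the wrong value. The following minimal Python sketch is our own illustration of resolving such hazards by nop insertion (a toy instruction encoding invented for the example, not the authors' algorithm or code):

        # Illustrative only: enforce a rollback distance n by inserting nops so
        # that no register is overwritten within n instructions of a prior read.
        # Each instruction is modelled as a (writes, reads) pair of register names.
        NOP = ((), ())

        def insert_nops_for_rollback(code, n):
            out = []
            last_read = {}                      # register -> index of its latest read
            for writes, reads in code:
                for reg in writes:
                    # Pad until the last read of this register is more than
                    # n instructions behind the position of the new write.
                    while reg in last_read and len(out) - last_read[reg] <= n:
                        out.append(NOP)
                out.append((writes, reads))
                for reg in reads:
                    last_read[reg] = len(out) - 1
            return out

        if __name__ == "__main__":
            # r3 is read by the first instruction and overwritten by the second;
            # with n = 2 the write is pushed three slots away.
            prog = [(("r1",), ("r2", "r3")),    # r1 <- f(r2, r3)
                    (("r3",), ("r1",))]         # r3 <- g(r1)
            for i, instr in enumerate(insert_nops_for_rollback(prog, 2)):
                print(i, "nop" if instr == NOP else instr)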

  16. Regulatory and technical reports (abstract index journal): Annual compilation for 1996, Volume 21, No. 4

    Energy Technology Data Exchange (ETDEWEB)

    Sheehan, M.A.

    1997-04-01

    This compilation is the annual cumulation of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors.

  17. Digital compilation bedrock geologic map of part of the Waitsfield quadrangle, Vermont

    Data.gov (United States)

    Vermont Center for Geographic Information — Digital Data from VG96-03: Digital compilation bedrock geologic map of part of the Waitsfield quadrangle, Vermont: VGS Open-File Report VG96-3A, 2 plates, scale...

  18. Supplementary material on passive solar heating concepts. A compilation of published articles

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-05-01

    A compilation of published articles and reports dealing with passive solar energy concepts for heating and cooling buildings is presented. The following are included: fundamentals of passive systems, applications and technical analysis, graphic tools, and information sources. (MHR)

  19. Comment on "Atomic mass compilation 2012" by B. Pfeiffer, K. Venkataramaniah, U. Czok, C. Scheidenberger

    CERN Document Server

    Audi, Georges; Block, Michael; Bollen, Georg; Herfurth, Frank; Goriely, Stéphane; Hardy, John C; Kondev, Filip G; Kluge, Juergen H; Lunney, David; Pearson, Mike J; Savard, Guy; Sharma, Kumar; Wang, Meng; Zhang, Yuhu

    2014-01-01

    This "Comment" submitted to ADNDT on December 13, 2013 concerns a publication entitled "Atomic Mass Compilation 2012", which is due to appear in the March 2014 issue of the journal Atomic Data and Nuclear Data Tables (available online on September 6, 2013). We would like to make it clear that this paper is not endorsed by the Atomic Mass Evaluation (AME) international collaboration. The AME provides carefully recommended evaluated data, published periodically. The "Atomic Mass Compilation 2012" is not to be associated with the latest publication, AME2012, nor with any of the previously published mass evaluations that were developed under the leadership of Prof. A.H. Wapstra. We found the data presented in "Atomic Mass Compilation 2012" to be misleading and the approach implemented to be lacking in rigour since it does not allow to unambiguously trace the original published mass values. Furthermore, the method used in "Atomic Mass Compilation 2012" is not valid and leads to erroneous and contradictory outputs,...

  20. Constraining stellar population models - I. Age, metallicity, and abundance pattern compilation for Galactic globular clusters

    CERN Document Server

    Roediger, Joel C; Graves, Genevieve; Schiavon, Ricardo

    2013-01-01

    We present an extensive literature compilation of age, metallicity, and chemical abundance pattern information for the 41 Galactic globular clusters (GGCs) studied by Schiavon et al. (2005). Our compilation constitutes a notable improvement over previous similar work, particularly in terms of chemical abundances. Its primary purpose is to enable detailed evaluations of and refinements to stellar population synthesis models designed to recover the above information for unresolved stellar systems based on their integrated spectra. However, since the Schiavon sample spans a wide range of the known GGC parameter space, our compilation may also benefit investigations related to a variety of astrophysical endeavours, such as the early formation of the Milky Way, the chemical evolution of GGCs, and stellar evolution and nucleosynthesis. For instance, we confirm with our compiled data that the GGC system has a bimodal metallicity distribution and is uniformly enhanced in the alpha-elements. When paired with the ages...

  1. Expert Programmer versus Parallelizing Compiler: A Comparative Study of Two Approaches for Distributed Shared Memory

    Directory of Open Access Journals (Sweden)

    M. F. P. O'Boyle

    1996-01-01

    Full Text Available This article critically examines current parallel programming practice and optimizing compiler development. The general strategies employed by compiler and programmer to optimize a Fortran program are described, and then illustrated for a specific case by applying them to a well-known scientific program, TRED2, using the KSR-1 as the target architecture. Extensive measurement is applied to the resulting versions of the program, which are compared with a version produced by a commercial optimizing compiler, KAP. The compiler strategy significantly outperforms KAP and does not fall far short of the performance achieved by the programmer. Following the experimental section each approach is critiqued by the other. Perceived flaws, advantages, and common ground are outlined, with an eye to improving both schemes.

  2. A comparison between cross-compiler and native development for mobile applications

    OpenAIRE

    Larsson, Thimmy; Wedin, Jonas

    2017-01-01

    Developing mobile applications for several platforms is a challenge for developers today. Supporting multiple applications with separate code bases is expensive and time consuming. To solve this problem, the cross-compiler technique is available to developers. This thesis compares the performance and developer experience of native applications on Android and iOS with applications created with the cross-compiler Xamarin. An application is defined and developed in order to test multiple har...

  3. Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler

    Directory of Open Access Journals (Sweden)

    Eric De Sturler

    1997-01-01

    Full Text Available Recently, the first commercial High Performance Fortran (HPF) subset compilers have appeared. This article reports on our experiences with the xHPF compiler of Applied Parallel Research, version 1.2, for the Intel Paragon. At this stage, we do not expect very high performance from our HPF programs, even though performance will eventually be of paramount importance for the acceptance of HPF. Instead, our primary objective is to study how to convert large Fortran 77 (F77) programs to HPF such that the compiler generates reasonably efficient parallel code. We report on a case study that identifies several problems when parallelizing code with HPF; most of these problems affect current HPF compiler technology in general, although some are specific to the xHPF compiler. We discuss our solutions from the perspective of the scientific programmer, and present timing results on the Intel Paragon. The case study comprises three programs of different complexity with respect to parallelization. We use the dense matrix-matrix product to show that the distribution of arrays and the order of nested loops significantly influence the performance of the parallel program. We use Gaussian elimination with partial pivoting to study the parallelization strategy of the compiler. There are various ways to structure this algorithm for a particular data distribution. This example shows how much effort may be demanded from the programmer to support the compiler in generating an efficient parallel implementation. Finally, we use a small application to show that the more complicated structure of a larger program may introduce problems for the parallelization, even though all subroutines of the application are easy to parallelize by themselves. The application consists of a finite volume discretization on a structured grid and a nested iterative solver. Our case study shows that it is possible to obtain reasonably efficient parallel programs with xHPF, although the compiler
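
    The point that array distribution and loop order drive performance is, at bottom, a data-locality issue, and the single-node analogue is easy to demonstrate. The sketch below is a generic Python/NumPy illustration added here (not HPF and not part of the xHPF case study); it times traversal of the same row-major matrix along contiguous rows versus strided columns:

        # Generic locality illustration: row-wise traversal of a row-major
        # (C-ordered) matrix touches memory with unit stride, while column-wise
        # traversal strides across it.
        import time
        import numpy as np

        def timed(fn, a):
            t0 = time.perf_counter()
            fn(a)
            return time.perf_counter() - t0

        def best_time(fn, a, repeats=3):
            return min(timed(fn, a) for _ in range(repeats))

        def sum_by_rows(a):
            return sum(a[i, :].sum() for i in range(a.shape[0]))   # unit stride

        def sum_by_cols(a):
            return sum(a[:, j].sum() for j in range(a.shape[1]))   # strided

        if __name__ == "__main__":
            a = np.random.rand(4000, 4000)      # about 128 MB, row-major by default
            print("row-wise:    %.3f s" % best_time(sum_by_rows, a))
            print("column-wise: %.3f s" % best_time(sum_by_cols, a))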

  4. Using MaxCompiler for High Level Synthesis of Trigger Algorithms

    CERN Document Server

    Summers, Sioni Paris

    2016-01-01

    sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  5. Compilation of high energy physics reaction data: inventory of the particle data group holdings 1980

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.; Stevens, P.R.; Rittenberg, A.

    1980-12-01

    A compilation is presented of reaction data taken from experimental high energy physics journal articles, reports, preprints, theses, and other sources. Listings of all the data are given, and the data points are indexed by reaction and momentum, as well as by their source document. Much of the original compilation was done by others working in the field. The data presented also exist in the form of a computer-readable and searchable database; primitive access facilities for this database are available.
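
    As a rough sketch of the sort of indexing described (data points retrievable by reaction, by momentum, and by source document), the short Python fragment below builds an in-memory index over a few invented example records; the field names and values are placeholders, not taken from the Particle Data Group holdings:

        # Toy index over reaction-data records, keyed by reaction and by source
        # document, with a simple momentum-range lookup. Example data is invented.
        from collections import defaultdict

        records = [
            {"reaction": "pi- p -> pi0 n", "p_lab_gev": 5.0,  "source": "DOC-A"},
            {"reaction": "pi- p -> pi0 n", "p_lab_gev": 10.0, "source": "DOC-B"},
            {"reaction": "K+ p -> K+ p",   "p_lab_gev": 8.2,  "source": "DOC-A"},
        ]

        by_reaction, by_source = defaultdict(list), defaultdict(list)
        for rec in records:
            by_reaction[rec["reaction"]].append(rec)
            by_source[rec["source"]].append(rec)

        def lookup(reaction, p_min, p_max):
            """Records for `reaction` with beam momentum in [p_min, p_max] GeV/c."""
            return [r for r in by_reaction[reaction]
                    if p_min <= r["p_lab_gev"] <= p_max]

        print(lookup("pi- p -> pi0 n", 4.0, 6.0))
        print([r["reaction"] for r in by_source["DOC-A"]])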

  6. Fifth Baltic Sea pollution load compilation (PLC-5). An executive summary

    Energy Technology Data Exchange (ETDEWEB)

    Svendsen, L.M.; Staaf, H.; Pyhala, M.; Kotilainen, P.; Bartnicki, J.; Knuuttila, S.; Durkin, M.

    2012-07-01

    This report summarizes and combines the main results of the Fifth Baltic Sea Pollution Load Compilation (HELCOM 2011) which covers waterborne loads to the sea and data on atmospheric loads which are submitted by countries to the co-operative programme for monitoring and evaluation of the long range transmission of air pollutants in Europe (EMEP), which subsequently compiles and reports this information to HELCOM.

  7. Natural gas monthly, May 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-05-01

    The Natural Gas Monthly highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. The feature article this month is "Restructuring energy industries: Lessons from natural gas." 6 figs., 26 tabs.

  8. Natural gas monthly, June 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-06-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. The feature article this month is the executive summary from Natural Gas 1994: Issues and Trends. 6 figs., 31 tabs.

  9. Natural gas monthly, January 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-02-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. The featured article for this month is on US coalbed methane production.

  10. Natural gas monthly, December 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-01

    The Natural Gas Monthly highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. The article this month is entitled "Recent Trends in Natural Gas Spot Prices." 6 figs., 27 tabs.

  11. EVALUATION OF VARIOUS COMPILER OPTIMIZATION TECHNIQUES RELATED TO MIBENCH BENCHMARK APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Jeyaraj Andrews

    2013-01-01

    Full Text Available Tuning compiler optimizations for a given application on a particular computer architecture is not an easy task, because modern compilers for such architectures provide a large number of optimization techniques. Applying all of these techniques to a given application can degrade program performance and is also time consuming. The performance of a program, measured in time and space, depends on the machine architecture, the problem domain, and the compiler settings. A brute-force search over all possible combinations is infeasible, since its complexity is O(2^n) even for n on-off optimizations. Although many techniques exist for searching the space of compiler options for good settings, most of these approaches are expensive and time consuming. In this study, a machine learning algorithm has been modified and used to reduce the complexity of selecting suitable compiler options for programs running on a specific hardware platform. The machine learning algorithm is compared with an advanced combined elimination strategy in terms of tuning time and normalized tuning time. The experiments were conducted on a Core i7 processor and tested with different MiBench benchmark applications. It has been observed that the performance achieved by the machine learning algorithm is better than that of the advanced combined elimination strategy.
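
    As a point of reference for the search strategies being compared, the sketch below shows a greedy elimination pass in the spirit of combined elimination (not the paper's code): start with all candidate flags enabled, drop each one in turn, and keep the removal only if the measured run time does not get worse. The compiler, flag list, and benchmark file are placeholder assumptions:

        # Greedy flag elimination over a small set of on-off optimizations.
        # Placeholder compiler, flags, and benchmark; adapt to the real setup.
        import subprocess
        import time

        CC, BASE = "gcc", ["-O2"]
        CANDIDATES = ["-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer"]
        SRC, EXE = "benchmark.c", "./benchmark"

        def build_and_time(flags):
            subprocess.run([CC, *BASE, *flags, SRC, "-o", EXE], check=True)
            t0 = time.perf_counter()
            subprocess.run([EXE], check=True)
            return time.perf_counter() - t0

        def eliminate(candidates):
            flags = list(candidates)
            best = build_and_time(flags)
            for flag in list(flags):
                trial = [f for f in flags if f != flag]
                t = build_and_time(trial)
                if t <= best:               # removal did not hurt: keep it removed
                    flags, best = trial, t
            return flags, best

        if __name__ == "__main__":
            kept, runtime = eliminate(CANDIDATES)
            print("kept flags:", kept, " runtime: %.3f s" % runtime)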

  12. Investigate Methods to Decrease Compilation Time-AX-Program Code Group Computer Science R& D Project

    Energy Technology Data Exchange (ETDEWEB)

    Cottom, T

    2003-06-11

    Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall clock time a developer spends waiting for an executable. Analyzing the make system of a slow-to-build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build and restructuring the system to maximize the parallelization can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The sum of these time improvements can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours. Final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours. Final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.
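
    The build-parallelization point is easy to check empirically: rebuild the same tree at several parallelism levels and compare wall-clock times. The harness below is a generic sketch added here, not the Kull build system; it assumes a make-based project with a clean target:

        # Time a full rebuild at several parallelism levels to see where the
        # build stops scaling. Assumes a make-based project with a clean target.
        import subprocess
        import time

        def timed_rebuild(jobs):
            subprocess.run(["make", "clean"], check=True)
            t0 = time.perf_counter()
            subprocess.run(["make", "-j%d" % jobs], check=True)
            return time.perf_counter() - t0

        if __name__ == "__main__":
            for jobs in (1, 2, 4, 8, 16):
                print("make -j%-2d  %7.1f s" % (jobs, timed_rebuild(jobs)))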

  13. A compiler and validator for flight operations on NASA space missions

    Science.gov (United States)

    Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina

    2016-07-01

    In NASA missions, the management and programming of the flight systems is performed with a specific scripting language, the SASF (Spacecraft Activity Sequence File). In order to check the syntax and grammar, a compiler is needed that highlights any errors found in the sequence file produced for an instrument on board the flight system. Building on our experience with the Dawn mission, we developed VIRV (VIR Validator), a tool that checks the syntax and grammar of SASF, runs simulations of VIR acquisitions, and flags any violations of the flight rules in the sequences produced. The project of an SASF compiler (SSC - Spacecraft Sequence Compiler) is now ready for a new implementation: its generalization to different NASA missions. VIRV is, in fact, a compiler for a dialect of SASF that includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument contributes a library that is introduced into the compiler. The SSC can analyze an SASF file, produce a log of events, simulate the instrument acquisition, and check the flight rules for the selected instrument. The output of the program can be produced in GRASS GIS format and may help the operator analyze the geometry of the acquisition.
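
    At its core, such a validator parses each command in the sequence file, checks it against a per-instrument command table, and applies flight rules. The fragment below is a toy illustration with an invented command syntax and an invented rule (a minimum gap between acquisitions); it is not the SASF grammar and not the VIRV or SSC code:

        # Toy sequence validator. Invented line syntax: "<t_sec> <INSTR> <CMD>".
        COMMANDS = {"VIR": {"ACQUIRE", "CALIBRATE", "STANDBY"}}   # assumed table
        MIN_GAP_S = 10.0                                          # assumed flight rule

        def validate(lines, instrument="VIR"):
            errors, last_acq = [], None
            for lineno, line in enumerate(lines, 1):
                parts = line.split()
                if len(parts) != 3 or parts[1] != instrument:
                    errors.append("line %d: malformed command %r" % (lineno, line))
                    continue
                t, cmd = float(parts[0]), parts[2]
                if cmd not in COMMANDS[instrument]:
                    errors.append("line %d: unknown command %r" % (lineno, cmd))
                elif cmd == "ACQUIRE":
                    if last_acq is not None and t - last_acq < MIN_GAP_S:
                        errors.append("line %d: flight rule violated, acquisitions "
                                      "only %.1f s apart" % (lineno, t - last_acq))
                    last_acq = t
            return errors

        if __name__ == "__main__":
            seq = ["0 VIR CALIBRATE", "5 VIR ACQUIRE", "9 VIR ACQUIRE", "30 VIR FOO"]
            for err in validate(seq):
                print(err)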

  14. US Monthly Pilot Balloon Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Monthly winds aloft summary forms summarizing Pilot Balloon observational data for the United States. Generally labeled as Form 1114, and then transitioning to Form...

  15. Monthly energy review, August 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-26

    This publication presents information for the month of August, 1993 on the following: Energy overview; energy consumption; petroleum; natural gas; oil and gas resource development; coal; electricity; nuclear energy; energy prices, and international energy.

  16. Monthly Energy Review, July 1992

    Energy Technology Data Exchange (ETDEWEB)

    None

    1992-07-27

    The Monthly Energy Review is prepared by the Energy Information Administration. Topics discussed include: Energy Overview, Energy Consumption, Petroleum, Natural Gas, Oil and Gas Resource Development, Coal, Electricity, Nuclear Energy, Energy Prices, International Energy. (VC)

  17. Monthly energy review, July 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-07-01

    This document presents an overview of recent monthly energy statistics. Activities covered include: U.S. production, consumption, trade, stock, and prices for petroleum, coal, natural gas, electricity, and nuclear energy.

  18. Monthly energy review, August 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

    This report presents an overview of recent monthly energy statistics. The statistics cover the major activities of U.S. production, consumption, trade, stocks, and prices for petroleum, coal, natural gas, electricity, and nuclear energy.

  20. Your Child's Development: 15 Months

    Science.gov (United States)

  1. Your Child's Development: 6 Months

    Science.gov (United States)

  2. Your Child's Development: 2 Months

    Science.gov (United States)

  3. Monthly energy review, August 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-08-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. The MER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public. 37 figs., 73 tabs.

  4. Natural gas monthly, July 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-07-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 25 tabs.

  5. Natural gas monthly, June 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-06-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 25 tabs.

  6. Natural gas monthly, August 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-24

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information.

  7. Natural gas monthly, June 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 27 tabs.

  8. Natural gas monthly, April 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-26

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information.

  9. Natural gas monthly, September 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 27 tabs.

  10. Natural gas monthly, June 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 24 tabs.

  11. Natural gas monthly, October 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-10-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 27 tabs.

  12. Monthly energy review, April 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-04-01

    This report presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of U.S. production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy data. A brief summary of the monthly and historical comparison data is provided in Section 1 of the report. A highlight section of the report provides an assessment of summer 1997 motor gasoline price increases.

  13. Monthly energy review, April 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-04-01

    The Monthly Energy Review (MER) presents an overview of the Energy Information Administration's recent monthly energy statistics. The statistics cover the major activities of US production, consumption, trade, stocks, and prices for petroleum, natural gas, coal, electricity, and nuclear energy. Also included are international energy and thermal and metric conversion factors. The MER is intended for use by Members of Congress, Federal and State agencies, energy analysts, and the general public.

  14. Natural gas monthly, May 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-05-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time the NGM features articles designed to assist readers in using and interpreting natural gas information. 6 figs., 27 tabs.

  15. Natural gas monthly: December 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-01

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. Articles are included which are designed to assist readers in using and interpreting natural gas information.

  17. Natural Gas Monthly, March 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-25

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information.

  18. Natural gas monthly, July 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-20

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information.

  19. Natural gas monthly, November 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-11-29

    The Natural Gas Monthly (NGM) highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. From time to time, the NGM features articles designed to assist readers in using and interpreting natural gas information.

  20. Natural gas monthly, October 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-23

    The Natural Gas Monthly highlights activities, events, and analyses of interest to public and private sector organizations associated with the natural gas industry. Volume and price data are presented each month for natural gas production, distribution, consumption, and interstate pipeline activities. Producer-related activities and underground storage data are also reported. A glossary of the terms used in this report is provided to assist readers in understanding the data presented in this publication. 6 figs., 30 tabs.