WorldWideScience

Sample records for emission based analyses

  1. Quality assurance challenges in x-ray emission based analyses

    International Nuclear Information System (INIS)

    Papp, T.

    2005-01-01

    Complete text of publication follows. There is a large scatter in the results of X-ray analysis with solid-state detectors, suggesting a methodological origin. Although the PIXE (proton induced X-ray emission) analytical technique can work without reference to any physics, as was commented at the recent PIXE conference, one could argue that if the same technique reveals problems when used for measuring physical quantities, then methodological issues cannot a priori be excluded. We present a simple example which could be interpreted as an indication that methodological issues deserve consideration. Recently an inter-comparison was made of analyses of spectra measured at the laboratory of the International Atomic Energy Agency (IAEA). Four participating analytical software packages were used to evaluate the X-ray spectra. For several thin metal sample spectra, a common energy scale could not be established. The quality of a spectrum can be judged from the line shape, which is parametrized by the full width at half maximum (FWHM) of a peak and the so-called low-energy tailing. Fitting the spectra individually, we obtained FWHM squared values at different energies and determined the linear regression parameters. The parameters suggest a rather poor detector performance. It is generally assumed that the (FWHM)² values follow a first-order polynomial as a function of X-ray energy. Having done a linear regression analysis, we can plot the standard residuals, presented in Fig. 1, which clearly show a three-sigma deviation. The probability of such a three-sigma deviation is 1%. In other words, the probability that these spectra are in accordance with the expected FWHM functional form is less than 1%. The main problem is that, although the composite spectra were analyzed using four different programs, the difficulty in interpreting the spectra was not commented upon by any of the participants in the inter-comparison. (author)
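
    The FWHM check described in this abstract is straightforward to reproduce. Below is a minimal Python sketch (illustrative only: the peak energies and FWHM values are invented placeholders, not the IAEA inter-comparison spectra) that fits (FWHM)² as a first-order polynomial in X-ray energy and inspects the standardized residuals for a three-sigma outlier, which is the test the author applies.

      import numpy as np

      # Hypothetical peak energies (keV) and fitted FWHM values (eV); placeholders,
      # not the IAEA inter-comparison data discussed above.
      energy = np.array([3.69, 5.90, 6.93, 8.04, 8.64, 11.92, 15.77])   # keV
      fwhm = np.array([132.0, 150.0, 158.0, 166.0, 171.0, 190.0, 210.0])  # eV

      # (FWHM)^2 is expected to be linear in energy: FWHM^2 = a + b * E
      y = fwhm ** 2
      slope, intercept = np.polyfit(energy, y, deg=1)
      fit = slope * energy + intercept

      # Standardized residuals: (observed - fitted) / residual standard deviation
      resid = y - fit
      std_resid = resid / np.sqrt(np.sum(resid ** 2) / (len(y) - 2))

      print("slope, intercept:", round(slope, 1), round(intercept, 1))
      print("standardized residuals:", np.round(std_resid, 2))
      print("any |residual| >= 3 sigma?", bool(np.any(np.abs(std_resid) >= 3)))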

  2. Determination of radioactive emission origins based on analyses of isotopic composition

    International Nuclear Information System (INIS)

    Devell, L.

    1987-01-01

    The nature of radioactive emissions can be determined with good precision through gamma spectroscopy of air samples, which means that the type of source of the emission may be identified, e.g. a nuclear weapons test or a nuclear power plant accident. Combined with information on wind trajectories, it is normally possible to determine the time and area of the emission. In this preliminary study, the knowledge of and preparedness for such measurements are described. (L.E.)

  3. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of the multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  4. MASER: Measuring, Analysing, Simulating low frequency Radio Emissions.

    Science.gov (United States)

    Cecconi, B.; Le Sidaner, P.; Savalle, R.; Bonnin, X.; Zarka, P. M.; Louis, C.; Coffre, A.; Lamy, L.; Denis, L.; Griessmeier, J. M.; Faden, J.; Piker, C.; André, N.; Genot, V. N.; Erard, S.; King, T. A.; Mafi, J. N.; Sharlow, M.; Sky, J.; Demleitner, M.

    2017-12-01

    The MASER (Measuring, Analysing and Simulating Radio Emissions) project provides a comprehensive infrastructure dedicated to low frequency radio emissions. The archives involved include those of the Radioastronomie de Nançay and the CDPP deep archive. These datasets include Cassini/RPWS, STEREO/Waves, WIND/Waves, Ulysses/URAP, ISEE3/SBH, Voyager/PRA, the Nançay Decameter Array (Routine, NewRoutine, JunoN), the RadioJove archive, the Swedish Viking mission, Interball/POLRAD... MASER also includes a Python software library for reading raw data.

  5. PC based uranium enrichment analyser

    International Nuclear Information System (INIS)

    Madan, V.K.; Gopalakrishana, K.R.; Bairi, B.R.

    1991-01-01

    It is important to measure the enrichment of unirradiated nuclear fuel elements during production as a quality control measure. An IBM PC based system has recently been tested for enrichment measurements for the Nuclear Fuel Complex (NFC), Hyderabad. As required by NFC, the system is easy to calibrate. It is easy to switch the system from measuring the enrichment of fuel elements to that of pellets, and the data and results are stored automatically. The system uses an IBM PC plug-in card to acquire data. The card incorporates programmable interval timers (8253-5), and the counter/timer devices are accessed via I/O-mapped I/O. A novel algorithm has been incorporated to make the system more reliable. The application software has been written in BASIC. (author). 9 refs., 1 fig

  6. A practical approach: in-situ continuous emission monitoring analysers

    Energy Technology Data Exchange (ETDEWEB)

    C.B. Daw; A.J. Bowers [Procal Analytics Ltd, Peterborough (United Kingdom)]

    2004-07-01

    Advances in the design and construction of stack-mounted analysers have resulted in a large demand for this technology for continuous emission monitoring (CEM) of air pollutants from fossil-fuel power plants. The paper looks at some difficulties encountered in the use of on-stack CEMs and how to overcome them. Examples are given of installations of in-situ CEM systems at three coal-fired power plants: Drax (UK), Powerton (United States) and TVA Paradise (United States). 12 figs., 1 tab.

  7. Potpourri of proton induced x-ray emission analyses

    International Nuclear Information System (INIS)

    Mangelson, N.F.; Nielson, K.K.; Eatough, D.J.; Hansen, L.D.

    1974-01-01

    A proton-induced x-ray emission analysis (PIXE) system using 2-MeV protons was developed. Measurements are being made in connection with several research projects. A study is being conducted to provide ecological baseline information in the region of the Navajo and the proposed Kaiparowits coal-fired electric generating stations. Trace-element measurements in this study are reported on air-particulate samples, small rodent tissues, soils, and plants. In another study, air particulates collected near a source of SO2 are extracted from the collection filter with an HCl solution, and sulfate and sulfite ions are determined by calorimetric methods. The extraction solution is also analyzed by PIXE to determine the elemental composition. The latter information is necessary for an understanding of possible interferences with the calorimetric method and also indicates the heavy metals emitted by the source. Studies on human autopsy tissues and archeological artifacts, as well as applications in regular graduate and undergraduate laboratory classes, are mentioned briefly.

  8. Isotope analysis by emission spectroscopy; Analyse isotopique par spectroscopie d'emission

    Energy Technology Data Exchange (ETDEWEB)

    Artaud, J; Gerstenkorn, S [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires]; Blaise, J [Centre National de la Recherche Scientifique (CNRS), Lab. Aime Cotton, 92 - Meudon-Bellevue (France)]

    1959-07-01

    Quantitative analysis of isotope mixtures by emission spectroscopy relies on the phenomenon called 'isotope shift', that is, on the fact that the spectral lines produced by a mixture of isotopes of the same element are complex. Each spectral line is, in fact, composed of several lines corresponding respectively to each isotope. The isotopic components therefore lie close to one another, and their separation is achieved by means of a Fabry-Perot etalon: the instrument used to measure abundances is the photoelectric Fabry-Perot spectrometer designed in 1948 by Jacquinot and Dufour. This method has been used to determine abundances for helium, lithium, lead and uranium. In the case of lithium, the analysis line used depends on the composition of the isotopic mixture examined. For mixtures containing 7 to 93 per cent of one of the isotopes of lithium, this line is the lithium blue line at {lambda} = 4603 angstrom. In other cases the red line at {lambda} = 6707 angstrom is preferable, although it readily allows only relative determinations. Helium presents no particular difficulty, and the analysis line selected was {lambda} = 6678 angstrom. For lead, the line {lambda} = 5201 angstrom makes it possible to determine the isotopic abundances of the four lead isotopes despite the hyperfine structure of ²⁰⁷Pb. For uranium, the line {lambda} = 5027 angstrom is used, and the method allows the composition of isotope mixtures with a ²³⁵U content as low as 0.1 per cent to be determined. The relative precision is about 2 per cent for ²³⁵U contents above 1 per cent. For lower contents, the line {lambda} = 5027 angstrom allows relative measurements using previously assayed mixtures. (author)

  9. Australia’s Consumption-based Greenhouse Gas Emissions

    DEFF Research Database (Denmark)

    Levitt, Clinton J.; Saaby, Morten; Sørensen, Anders

    2017-01-01

    We use data from the World Input-Output Database in a multiregional input–output model to analyse Australian consumption-based greenhouse gas emissions for the years 1995 to 2009. We find that the emission content of Australian macroeconomic activity has changed over the 15-year period. Consumption...
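
    A multiregional input-output calculation of the kind used here reduces to a Leontief-inverse computation. The toy Python sketch below uses a made-up two-region, one-sector table (not WIOD data) purely to illustrate the mechanics of attributing emissions to final consumption.

      import numpy as np

      # Toy two-region, one-sector input-output table; illustrative numbers only,
      # not the World Input-Output Database used in the study.
      # A[i, j]: input from region i required per unit of output in region j.
      A = np.array([[0.20, 0.05],
                    [0.10, 0.25]])
      y_home = np.array([80.0, 20.0])  # final demand of the "home" region, by producing region
      e = np.array([0.30, 0.60])       # direct emission intensity of each region's output

      L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse
      x = L @ y_home                    # output everywhere driven by home final demand
      consumption_based = e @ x         # emissions embodied in home consumption
      print("consumption-based emissions:", round(float(consumption_based), 2))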

  10. Programs in Fortran language for reporting the results of the analyses by ICP emission spectroscopy

    International Nuclear Information System (INIS)

    Roca, M.

    1985-01-01

    Three programs, written in FORTRAN IV language, for reporting the results of the analyses by ICP emission spectroscopy from data stored in files on floppy disks have been developed. They are intended, respectively, for the analyses of: 1) waters, 2) granites and slates, and 3) different kinds of geological materials. (Author) 8 refs

  11. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  12. Analysing the emission gap between pledged emission reductions under the Cancun Agreements and the 2 °C climate target

    Energy Technology Data Exchange (ETDEWEB)

    Den Elzen, M.G.J.; Roelfsema, M.; Hof, A.F. [Netherlands Environmental Assessment Agency PBL, Den Haag (Netherlands)]; Boettcher, H. [Institute for Applied Systems Analysis IIASA, Laxenburg (Austria)]; Grassi, G. [Joint Research Centre JRC, European Commission, Ispra (Italy)]

    2012-04-15

    In the Cancun Agreements, Annex I Parties (industrialised countries) and non-Annex I Parties (developing countries) made voluntary pledges to reduce greenhouse gas emissions by 2020. The Cancun Agreements also state a long-term target of limiting the temperature increase to a maximum of 2 °C above pre-industrial levels. This report is an update of the PBL report 'Evaluation of the Copenhagen Accord', which, like earlier studies, showed that there is a possible gap between the emission level resulting from the pledges and the level necessary to achieve the 2 °C target. The updates involve new information on many topics that has become available over the last two years, including updated national business-as-usual emission projections as provided by the countries themselves, and more information on uncertainties and on factors influencing the size of the emission gap. In this context, the main objective of this report can be formulated as follows: this report analyses the effect of the pledges put forward by the Parties in the Cancun Agreements on the emission gap, taking into account all the new information available. It pays specific attention to uncertainties and risks and describes in more detail the emission implications of the pledges and actions of the 12 largest emitting countries or regions.

  13. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically......

  14. Integrated well-to-wheel assessment on biofuels, analysing energy, emission and welfare economic consequences

    Energy Technology Data Exchange (ETDEWEB)

    Slentoe, E.; Moeller, F.; Frederiksen, P.; Jepsen, M.R.

    2011-07-15

    Various biofuel evaluation methods exist, with different analytical framework setups and different scopes. The scope of this study is to develop an integrated method to evaluate the consequences of producing biofuels. The consequences should include energy consumption, emission and welfare economic changes within the well-to-wheel (WTW) flow chain, focusing on the production of biomass and its subsequent conversion into biofuel and combustion in vehicles. This method (Moeller and Slentoe, 2010) is applied to a Danish case, implementing policy targets for biofuel use in the transport sector and also developing an alternative scenario of higher biofuel shares. This paper presents the results of three interlinked, parallel-running analyses of energy consumption, emissions and welfare economics (Slentoe, Moeller and Winther, 2010), and discusses the feasibility of those analyses, which are based on the same consequential analysis method, comparing a scenario situation to a reference situation. As will be shown, the results are not unequivocal; for example, what is an energy gain is not necessarily a welfare economic gain. The study is conducted as part of the Danish REBECa project. Within this, two main scenarios, HS1 and HS2, for the biofuel mixture in fossil diesel fuel and gasoline are established. The biofuel rape diesel (RME) stems from rape seeds, and bioethanol stems from either wheat grains (1st generation) or straw (2nd generation) - all cultivated in Denmark. The share of 2nd generation bioethanol exceeds that of 1st generation bioethanol towards 2030. Both scenarios start at a 5.75% mixture in 2010 and reach 10% and 25% in 2030 for HS1 and HS2, respectively, such that the low-mixture scenario reflects the Danish Act on sustainable biofuels (June 2009), implementing the EU renewable energy directive (2009/29/EC), using biofuels as an energy carrier. The two scenarios are computed in two variants each, reflecting oil prices of $65 and $100 per barrel. (Author)

  15. Comparative multivariate analyses of transient otoacoustic emissions and distorsion products in normal and impaired hearing.

    Science.gov (United States)

    Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel

    2015-01-01

    The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has long been studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but to what extent they can be used as predictors for hearing loss is still debated. Most studies agree that multivariate analyses have better test performances than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in identifying normal and impaired hearing, using the pure tone audiogram as a gold standard procedure and different multivariate statistical approaches. The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry, otoacoustic emission tests. We chose to use logistic regression as a multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of the logistic score, which is a combination of the inputs, weighted by coefficients calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic curve analysis. We used the logistic score to generate receiver operating characteristic curves and to estimate the areas under the curves in order to compare the different multivariate analyses. We compared the performance of each otoacoustic emission (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, proving the performance of the otoacoustic emissions. Each otoacoustic emission test presented high
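
    As a rough illustration of the multivariate approach described (not the authors' models or data), the Python sketch below fits a logistic regression on simulated predictors, derives the logistic score and summarises performance with the area under the ROC curve; all feature names and values are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 105  # same number of subjects as the study, but the data here are simulated

      # Hypothetical predictors: age, TEOAE level, DPOAE level, tinnitus (0/1)
      X = np.column_stack([
          rng.normal(45, 15, n),
          rng.normal(5, 4, n),
          rng.normal(4, 4, n),
          rng.integers(0, 2, n),
      ])
      # Hearing status from pure-tone audiometry (gold standard), True = impaired
      hearing_loss = (0.03 * X[:, 0] - 0.3 * X[:, 1] - 0.2 * X[:, 2]
                      + rng.normal(0, 1, n)) > 0

      model = LogisticRegression(max_iter=1000).fit(X, hearing_loss)
      logistic_score = model.decision_function(X)  # weighted combination of the inputs
      print("AUC:", round(roc_auc_score(hearing_loss, logistic_score), 3))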

  16. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region

  17. Net Energy, CO2 Emission and Land-Based Cost-Benefit Analyses of Jatropha Biodiesel: A Case Study of the Panzhihua Region of Sichuan Province in China

    Directory of Open Access Journals (Sweden)

    Xiangzheng Deng

    2012-06-01

    Bioenergy is currently regarded as a renewable energy source with a high growth potential. Forest-based biodiesel, with the significant advantage of not competing with grain production on cultivated land, has been considered a promising substitute for diesel fuel by many countries, including China. Consequently, extracting biodiesel from Jatropha curcas has become a growing industry. However, many key issues related to the development of this industry are still not fully resolved and the prospects for this industry are complicated. The aim of this paper is to evaluate the net energy, CO2 emissions, and cost efficiency of Jatropha biodiesel as a substitute fuel in China, to help resolve some of the key issues, by studying data from a region of China that is well suited to growing Jatropha. Our results show that: (1) Jatropha biodiesel is preferable to diesel fuel for global warming mitigation in terms of the carbon sink during Jatropha tree growth. (2) The net energy yield of Jatropha biodiesel is much lower than that of fossil fuel, owing to the high energy consumption during Jatropha plantation establishment and the conversion of seed oil to diesel fuel. Therefore, the energy efficiencies of the production of Jatropha and its conversion to biodiesel need to be improved. (3) Due to the current low profit and high risk in the study area, farmers have little incentive to continue or increase Jatropha production. (4) It is necessary to provide more subsidies and preferential policies for Jatropha plantations if this industry is to grow. It is also necessary for local government to set realistic objectives and make rational plans to choose proper sites for Jatropha biodiesel development, and the work reported here should assist that effort. Future research focused on breeding high-yield varieties, development of efficient field

  18. Development of AIM for analysing policy options to reduce greenhouse gas emissions

    International Nuclear Information System (INIS)

    Kainuma, M.; Morita, T.; Matsuoka, Y.

    1999-01-01

    AIM (Asian-Pacific Integrated Model) has been developed for predicting greenhouse gas emissions and evaluating policy measures to reduce them. Two socio-economic scenarios were assumed and CO2 emissions were predicted based on these scenarios and policy intervention assumptions. It is found that mitigating CO2 emissions without scaling back productive activities or standards of living in Japan is possible. However, if one relies on the market mechanism alone, it cannot be done. The analysis has shown that it is essential to introduce new policies and measures such as a carbon tax and subsidies. (author)

  19. Theoretical analyses of superconductivity in iron based ...

    African Journals Online (AJOL)

    This paper focuses on the theoretical analysis of superconductivity in the iron-based superconductor Ba1−xKxFe2As2. After reviewing the current findings on this system, we suggest that a combined phonon-exciton mechanism gives the right order of magnitude for the superconducting transition temperature (TC) of Ba1−xKxFe2As2. By developing ...

  1. Competition and stability analyses among emissions, energy, and economy: Application for Mexico

    International Nuclear Information System (INIS)

    Pao, Hsiao-Tien; Fu, Hsin-Chia

    2015-01-01

    In view of the limited natural resources on Earth, the linkages among environment, energy, and economy (3Es) have become important perspectives for sustainable development. This paper proposes using the Lotka–Volterra model for SUstainable Development (LV-SUD) to analyse the interspecific interactions, equilibria and their stabilities among emissions, different types of energy consumption (renewable, nuclear, and fossil fuel), and real GDP, the main factors of 3Es issues. Modelling these interactions provides a useful multivariate framework for predicting outcomes. Interaction between the 3Es, namely competition, symbiosis, or predation, plays an important role in policy development to achieve a balanced use of energy resources and to strengthen the green economy. Applying LV-SUD to Mexico, an emerging-market country, the analysis shows that there is a mutualism between fossil fuel consumption and GDP; prey-predator relationships in which fossil fuel and GDP enhance the growth of emissions, but emissions inhibit the growth of the others; and commensalisms in which GDP benefits from nuclear power, and renewable power benefits from fossil fuel. It is suggested that national energy policies should remain committed to decoupling non-clean energy consumption from GDP, to actively developing clean energy and thereby to properly reducing fossil fuel consumption and emissions without harming economic growth. - Highlights: • LV-SUD is used to analyse the competition between environment, energy and economy (3Es). • The competition between renewable, nuclear, and fossil energy is analysed. • Competition between the 3Es plays an important role in policy development. • LV-SUD provides a useful multivariate framework for predicting outcomes. • An application to an emerging-market country, Mexico, is presented
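
    For readers unfamiliar with the modelling framework, the sketch below integrates a small generalized Lotka-Volterra system of the kind LV-SUD builds on. The growth rates and interaction coefficients are arbitrary placeholders chosen only to mimic the qualitative relationships named in the abstract, not the values estimated for Mexico.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Generalized Lotka-Volterra: dx_i/dt = x_i * (r_i + sum_j a_ij * x_j)
      # State: [emissions, fossil fuel use, renewable energy, GDP] in arbitrary units.
      r = np.array([0.02, 0.03, 0.05, 0.04])            # intrinsic growth rates (made up)
      a = np.array([[-0.020,  0.010,  0.000,  0.010],   # emissions fed by fossil fuel & GDP
                    [-0.005, -0.015,  0.000,  0.010],   # fossil fuel inhibited by emissions
                    [ 0.000,  0.005, -0.010,  0.000],   # renewables benefit from fossil fuel
                    [-0.005,  0.010,  0.005, -0.010]])  # GDP: mutualism / commensalism terms

      def gLV(t, x):
          return x * (r + a @ x)

      sol = solve_ivp(gLV, (0, 100), [1.0, 1.0, 0.2, 1.0], dense_output=True)
      print("state after 100 time units:", np.round(sol.y[:, -1], 3))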

  2. Emissions inventory and scenario analyses of air pollutants in Guangdong Province, China

    Science.gov (United States)

    Chen, Hui; Meng, Jing

    2017-03-01

    Air pollution, causing significantly adverse health impacts and severe environmental problems, has raised great concerns in China in the past few decades. Guangdong Province faces major challenges to address the regional air pollution problem due to the lack of an emissions inventory. To fill this gap, an emissions inventory of primary fine particles (PM2.5) is compiled for the year 2012, and the key precursors (sulfur dioxide, nitrogen oxides) are identified. Furthermore, policy packages are simulated during the period of 2012‒2030 to investigate the potential mitigation effect. The results show that in 2012, SO2, NOx, and PM2.5 emissions in Guangdong Province were as high as (951.7, 1363.6, and 294.9) kt, respectively. Industrial production processes are the largest source of SO2 and PM2.5 emissions, and transport is the top contributor of NOx emissions. Both the baseline scenario and policy scenario are constructed based on projected energy growth and policy designs. Under the baseline scenario, SO2, NOx, and PM2.5 emissions will almost double in 2030 without proper emissions control policies. The suggested policies are categorized into end-of-pipe control in power plants (ECP), end-of-pipe control in industrial processes (ECI), fuel improvement (FI), energy efficiency improvement (EEI), substitution-pattern development (SPD), and energy saving options (ESO). With the implementation of all these policies, SO2, NOx, and PM2.5 emissions are projected to drop to (303.1, 585.4, and 102.4) kt, respectively, in 2030. This inventory and simulated results will provide deeper insights for policy makers to understand the present situation and the evolution of key emissions in Guangdong Province.
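
    The scenario arithmetic behind such an inventory is essentially "activity data times emission factor, minus control removal". The Python sketch below illustrates that structure with invented sector activities, factors, growth rates and removal efficiencies; none of the numbers are from the Guangdong inventory.

      # Minimal sketch of an "activity x emission factor" inventory with a control scenario.
      # All numbers are illustrative placeholders, not the Guangdong inventory values.
      activity_2012 = {"power": 180.0, "industry": 260.0, "transport": 150.0}  # activity units
      ef_so2 = {"power": 1.8, "industry": 2.2, "transport": 0.3}               # kt SO2 per unit
      growth_to_2030 = {"power": 1.9, "industry": 1.8, "transport": 2.1}       # baseline growth
      control_removal = {"power": 0.80, "industry": 0.70, "transport": 0.30}   # policy removal

      def inventory(activity, ef, removal=None):
          removal = removal or {}
          return sum(a * ef[s] * (1.0 - removal.get(s, 0.0)) for s, a in activity.items())

      baseline_2030 = {s: a * growth_to_2030[s] for s, a in activity_2012.items()}
      print("SO2 2012 (kt):          ", round(inventory(activity_2012, ef_so2), 1))
      print("SO2 2030 baseline (kt): ", round(inventory(baseline_2030, ef_so2), 1))
      print("SO2 2030 policy (kt):   ", round(inventory(baseline_2030, ef_so2, control_removal), 1))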

  3. Multiparametric voxel-based analyses of standardized uptake values and apparent diffusion coefficients of soft-tissue tumours with a positron emission tomography/magnetic resonance system: Preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Sagiyama, Koji; Kamei, Ryotaro; Honda, Hiroshi [Kyushu University, Department of Clinical Radiology, Graduate School of Medical Sciences, Fukuoka (Japan)]; Watanabe, Yuji; Kawanami, Satoshi [Kyushu University, Department of Molecular Imaging and Diagnosis, Graduate School of Medical Sciences, Fukuoka (Japan)]; Hong, Sungtak [Philips Electronics Japan, Healthcare, Tokyo (Japan)]; Matsumoto, Yoshihiro [Kyushu University, Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Fukuoka (Japan)]

    2017-12-15

    To investigate the usefulness of voxel-based analysis of standardized uptake values (SUVs) and apparent diffusion coefficients (ADCs) for evaluating soft-tissue tumour malignancy with a PET/MR system. Thirty-five subjects with either ten low/intermediate-grade tumours or 25 high-grade tumours were prospectively enrolled. Zoomed diffusion-weighted and fluorodeoxyglucose (¹⁸FDG)-PET images were acquired along with fat-suppressed T2-weighted images (FST2WIs). Regions of interest (ROIs) were drawn on FST2WIs including the tumour in all slices. ROIs were pasted onto PET and ADC-maps to measure SUVs and ADCs within tumour ROIs. Tumour volume, SUVmax, ADCminimum, the heterogeneity and the correlation coefficients of SUV and ADC were recorded. The parameters of high- and low/intermediate-grade groups were compared, and receiver operating characteristic (ROC) analysis was also performed. The mean correlation coefficient for SUV and ADC in high-grade sarcomas was lower than that of low/intermediate-grade tumours (-0.41 ± 0.25 vs. -0.08 ± 0.34, P < 0.01). Other parameters did not differ significantly. ROC analysis demonstrated that the correlation coefficient showed the best diagnostic performance for differentiating the two groups (AUC 0.79, sensitivity 96.0%, specificity 60%, accuracy 85.7%). SUV and ADC determined via PET/MR may be useful for differentiating between high-grade and low/intermediate-grade soft tissue tumours. (orig.)
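
    A hedged sketch of the voxel-wise analysis described: compute the correlation coefficient between SUV and ADC values inside a tumour mask. The volumes below are synthetic stand-ins; in practice they would be co-registered ¹⁸FDG-PET SUV and ADC maps with an ROI drawn on the fat-suppressed T2-weighted images.

      import numpy as np

      # Synthetic SUV and ADC volumes plus a synthetic tumour mask; placeholders only.
      rng = np.random.default_rng(1)
      shape = (32, 32, 16)
      suv = rng.gamma(2.0, 1.5, shape)                   # synthetic SUV volume
      adc = 2.5 - 0.3 * suv + rng.normal(0, 0.3, shape)  # synthetic ADC volume (10^-3 mm^2/s)
      roi = rng.random(shape) > 0.7                      # synthetic tumour mask

      suv_vox, adc_vox = suv[roi], adc[roi]
      r = np.corrcoef(suv_vox, adc_vox)[0, 1]            # per-tumour voxel-wise correlation
      print("SUVmax:", round(float(suv_vox.max()), 2),
            "ADCmin:", round(float(adc_vox.min()), 2),
            "voxel-wise r:", round(float(r), 2))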

  4. Identity-based estimation of greenhouse gas emissions from crop production

    DEFF Research Database (Denmark)

    Bennetzen, Eskild Hohlmann; Smith, Pete; Soussana, Jean-Francois

    2012-01-01

    reduction of emissions i.e. reducing emissions per unit of agricultural product rather than the absolute emissions per se. Hence the system productivity must be included in the same analysis. This paper presents the Kaya-Porter identity, derived from the Kaya identity, as a new way to calculate GHG...... (ha). These separate elements in the identity can be targeted in emissions reduction and mitigation policies and are useful to analyse past and current trends in emissions and to explore future scenarios. Using the Kaya-Porter identity we have performed a case study on Danish crop production and find...... emissions to have been reduced by 12% from 1992 to 2008, whilst yields per unit area have remained constant. Both land-based emissions and energy-based emissions have decreased, mainly due to a 41% reduction in nitrogen fertilizer use. The initial identity based analysis for crop production presented here......
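
    Although the abstract is truncated, the general mechanics of an identity-based decomposition can be shown with a Kaya-style identity in which total emissions are written as area × yield × emission intensity. The factor names and numbers below are illustrative assumptions, not the published Kaya-Porter terms or the Danish data.

      # Illustrative identity-based decomposition (a Kaya-style identity):
      #   GHG = area * (production / area) * (GHG / production)
      # The factor names and numbers are placeholders, not the published Kaya-Porter
      # terms or the Danish crop-production data.
      area = 2.5e6            # cultivated area (ha)
      production = 9.0e6      # crop production (t)
      ghg = 7.2e6             # emissions (t CO2-eq)

      yield_per_ha = production / area         # t / ha
      emission_intensity = ghg / production    # t CO2-eq / t product

      reconstructed = area * yield_per_ha * emission_intensity
      assert abs(reconstructed - ghg) < 1e-6 * ghg  # the identity holds by construction
      print(f"yield: {yield_per_ha:.2f} t/ha, intensity: {emission_intensity:.2f} t CO2-eq/t")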

  5. Consumption-based emission accounting for Chinese cities

    DEFF Research Database (Denmark)

    mi, zhifu; Zhang, Yunkun; Guan, Dabo

    2016-01-01

    Most of China’s CO2 emissions are related to energy consumption in its cities. Thus, cities are critical for implementing China’s carbon emissions mitigation policies. In this study, we employ an input-output model to calculate consumption-based CO2 emissions for thirteen Chinese cities and find......-based emissions exceed consumption-based emissions, whereas eight are consumption-based cities, with the opposite emissions pattern. Moreover, production-based cities tend to become consumption-based as they undergo socioeconomic development....

  6. In service monitoring based on fatigue analyses, possibilities and limitations

    International Nuclear Information System (INIS)

    Dittmar, S.; Binder, F.

    2004-01-01

    German LWRs are equipped with monitoring systems which are intended to enable a comparison of real transients with load case catalogues and fatigue catalogues for fatigue analyses. The accuracy of the information depends on the accuracy of the measurements, on the consideration of parameters influencing fatigue (medium, component surface, component size, etc.), and on the accuracy of the load analyses. The contribution attempts a critical evaluation, also in view of the fact that real fatigue damage is often impossible to quantify on the basis of fatigue analyses at a later stage. The effects of the consideration or non-consideration of various influencing factors are discussed, as well as the consequences of the scatter of the material characteristics on which the analyses are based. Possible measures to be taken in operational monitoring are derived. (orig.) [de]

  7. A bromine-based dichroic X-ray polarization analyser

    CERN Document Server

    Collins, S P; Brown, S D; Thompson, P

    2001-01-01

    We have demonstrated the advantages offered by dichroic X-ray polarization filters for linear polarization analysis, and describe such a device, based on a dibromoalkane/urea inclusion compound. The polarizer has been successfully tested by analysing the polarization of magnetic diffraction from holmium.

  8. Temporal and spatial variation in recent vehicular emission inventories in China based on dynamic emission factors.

    Science.gov (United States)

    Cai, Hao; Xie, Shaodong

    2013-03-01

    The vehicular emission trend in China was tracked for the recent period 2006-2009 based on a database of dynamic emission factors of CO, nonmethane volatile organic compounds (NMVOC), NOx, PM10, CO2, CH4, and N2O for all categories of on-road motor vehicles in China, which was developed at the provincial level using the COPERT 4 model, to account for the effects of rapid advances in engine technologies, implementation of improved emission standards, emission deterioration due to mileage, and fuel quality improvement. Results show that growth rates of CO and NMVOC emissions slowed down, but NOx and PM10 emissions continued rising rapidly over the period 2006-2009. Moreover, CO2, CH4, and N2O emissions in 2009 almost doubled compared to those in 2005. The characteristics of the recent spatial distribution of emissions and of emission contributions by vehicle category revealed that priority for vehicular emission control should be given to the eastern and southeastern coastal provinces and northern regions, and that passenger cars and motorcycles require stricter control for the reduction of CO and NMVOC emissions, while effective reduction of NOx and PM10 emissions can be achieved by better control of heavy-duty vehicles, buses and coaches, and passenger cars. Explicit provincial-level Monte Carlo uncertainty analysis, which quantified for the first time the Chinese vehicular emission uncertainties associated with both COPERT-derived and domestically measured emission factors by vehicle technology, showed that CO, NMVOC, and NOx emissions for the period 2006-2009 were calculated with the least uncertainty, followed by PM10 and CO2, despite relatively larger uncertainties in N2O and CH4 emissions. The quantified low uncertainties of emissions revealed the necessity of applying vehicle technology- and vehicle age-specific dynamic emission factors for vehicular emission estimation, and these improved methodologies are applicable for routine update and forecast of China's on-road motor vehicle
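
    A Monte Carlo uncertainty analysis of an emission estimate, as mentioned in the abstract, can be sketched as repeated sampling of the inputs of a simple "population × mileage × emission factor" calculation. The distributions below are illustrative placeholders, not the study's provincial data or COPERT 4 factors.

      import numpy as np

      # Monte Carlo sketch of emission-estimate uncertainty with lognormal inputs;
      # all parameters are illustrative assumptions.
      rng = np.random.default_rng(42)
      n = 100_000

      population = rng.lognormal(mean=np.log(2.0e6), sigma=0.05, size=n)  # vehicles
      mileage = rng.lognormal(mean=np.log(15_000), sigma=0.10, size=n)    # km / vehicle / yr
      ef_co = rng.lognormal(mean=np.log(2.5), sigma=0.20, size=n)         # g CO / km

      emissions_kt = population * mileage * ef_co / 1e9   # g -> kt
      lo, med, hi = np.percentile(emissions_kt, [2.5, 50, 97.5])
      print(f"CO emissions: {med:.0f} kt (95% CI {lo:.0f}-{hi:.0f} kt)")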

  9. Applications of high lateral and energy resolution imaging XPS with a double hemispherical analyser based spectromicroscope

    International Nuclear Information System (INIS)

    Escher, M.; Winkler, K.; Renault, O.; Barrett, N.

    2010-01-01

    The design and applications of an instrument for imaging X-ray photoelectron spectroscopy (XPS) are reviewed. The instrument is based on a photoelectron microscope and a double hemispherical analyser whose symmetric configuration avoids the spherical aberration (α²-term) inherent for standard analysers. The analyser allows high transmission imaging without sacrificing the lateral and energy resolution of the instrument. The importance of high transmission, especially for highest resolution imaging XPS with monochromated laboratory X-ray sources, is outlined and the close interrelation of energy resolution, lateral resolution and analyser transmission is illustrated. Chemical imaging applications using a monochromatic laboratory Al Kα-source are shown, with a lateral resolution of 610 nm. Examples of measurements made using synchrotron and laboratory ultra-violet light show the broad field of applications from imaging of core level electrons with chemical shift identification, high resolution threshold photoelectron emission microscopy (PEEM), work function imaging and band structure imaging.

  10. Analyses of CO2 emissions embodied in Japan-China trade

    International Nuclear Information System (INIS)

    Liu Xianbing; Ishikawa, Masanobu; Wang Can; Dong Yanli; Liu Wenling

    2010-01-01

    This paper examines CO2 emissions embodied in Japan-China trade. Besides directly quantifying the flow of CO2 emissions between the two countries by using a traditional input-output (IO) model, this study also estimates the effect of bilateral trade on CO2 emissions by scenario analysis. The time series of quantifications indicates that CO2 emissions embodied in goods exported from Japan to China increased overall from 1990 to 2000. The CO2 emissions exported from China to Japan greatly increased in the first half of the 1990s. However, by 2000, the amount of emissions had fallen from 1995 levels. Regardless, there was a net export of CO2 emissions from China to Japan during 1990-2000. The scenario comparison shows that the bilateral trade has helped reduce CO2 emissions. On average, the Chinese economy was confirmed to be much more carbon-intensive than Japan's. The regression analysis shows a significant but not perfect correlation between the carbon intensities at the sector level of the two countries. In terms of CO2 emission reduction opportunities, most sectors of Chinese industry could benefit from learning from Japanese technologies that produce lower carbon intensities.

  11. Analyser-based phase contrast image reconstruction using geometrical optics

    International Nuclear Information System (INIS)

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-01-01

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser
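
    The rocking-curve fit at the heart of this method can be reproduced with a standard least-squares routine. The sketch below fits a symmetric Pearson type VII profile to a synthetic rocking curve; the particular parameterisation and the data are assumptions for illustration, not the paper's Si(111) measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def pearson_vii(theta, amp, theta0, w, m):
          """Symmetric Pearson type VII profile (w: half-width, m: shape exponent)."""
          return amp * (1.0 + ((theta - theta0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

      # Synthetic analyser rocking curve (angles in microradians); placeholder data.
      theta = np.linspace(-40, 40, 161)
      true = pearson_vii(theta, 1.0, 0.0, 8.0, 1.8)
      rng = np.random.default_rng(3)
      measured = true + rng.normal(0, 0.01, theta.size)

      popt, pcov = curve_fit(pearson_vii, theta, measured, p0=[1.0, 0.0, 10.0, 2.0])
      print("amp, centre, half-width, m:", np.round(popt, 3))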

  12. Analyser-based phase contrast image reconstruction using geometrical optics.

    Science.gov (United States)

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.

  13. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  14. The Establishment of LTO Emission Inventory of Civil Aviation Airports Based on Big Data

    Science.gov (United States)

    Lu, Chengwei; Liu, Hefan; Song, Danlin; Yang, Xinyue; Tan, Qinwen; Hu, Xiang; Kang, Xue

    2018-03-01

    An estimation model for LTO emissions of civil aviation airports was developed in this paper. LTO big data were acquired from the internet with Python, the LTO emissions were dynamically calculated based on daily LTO data, and an uncertainty analysis was conducted with the Monte Carlo method. Using the model, the LTO emissions of Shuangliu International Airport were calculated, and the characteristics and temporal distribution of LTOs in 2015 were analysed. The results indicate that, compared with traditional methods, the model established here can calculate the LTO emissions from different types of airplanes more accurately. Based on the hourly LTO information of 302 valid days, the total number of LTO cycles in Chengdu Shuangliu International Airport was found to be 274,645, the annual emissions of SO2, NOx, VOCs, CO, PM10 and PM2.5 were estimated, and the uncertainty of the model was around 7% to 10%, varying by pollutant.
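
    The calculation layer of such a model reduces to daily LTO counts by aircraft group multiplied by per-LTO emission factors, with Monte Carlo sampling of the factor uncertainty. The Python sketch below illustrates that structure; the counts and factors are invented placeholders, not the Shuangliu data.

      import numpy as np

      # Minimal LTO inventory sketch: counts by aircraft group times per-LTO factors,
      # with Monte Carlo uncertainty on the factors. All numbers are placeholders.
      rng = np.random.default_rng(7)
      days = 302
      lto_per_day = {"narrow_body": rng.poisson(700, days),
                     "wide_body": rng.poisson(150, days),
                     "regional": rng.poisson(60, days)}
      nox_per_lto_kg = {"narrow_body": 9.0, "wide_body": 25.0, "regional": 4.0}

      def annual_nox_t(factor_scale=1.0):
          return sum(counts.sum() * nox_per_lto_kg[g] * factor_scale
                     for g, counts in lto_per_day.items()) / 1000.0

      # Monte Carlo on emission-factor uncertainty (roughly +/- 10%)
      samples = [annual_nox_t(rng.normal(1.0, 0.08)) for _ in range(5000)]
      lo, med, hi = np.percentile(samples, [2.5, 50, 97.5])
      print(f"annual NOx: {med:.0f} t (95% CI {lo:.0f}-{hi:.0f} t)")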

  15. Modeling of Control Costs, Emissions, and Control Retrofits for Cost Effectiveness and Feasibility Analyses

    Science.gov (United States)

    Learn about EPA’s use of the Integrated Planning Model (IPM) to develop estimates of SO2 and NOx emission control costs, projections of future emissions, and projections of the capacity of future control retrofits, assuming controls on EGUs.

  16. Greenhouse gas emission and exergy analyses of an integrated trigeneration system driven by a solid oxide fuel cell

    International Nuclear Information System (INIS)

    Chitsaz, Ata; Mahmoudi, S. Mohammad S.; Rosen, Marc A.

    2015-01-01

    Exergy and greenhouse gas emission analyses are performed for a novel trigeneration system driven by a solid oxide fuel cell (SOFC). The trigeneration system also consists of a generator-absorber heat exchanger (GAX) absorption refrigeration system and a heat exchanger to produce electrical energy, cooling and heating, respectively. Four operating cases are considered: electrical power generation, electrical power and cooling cogeneration, electrical power and heating cogeneration, and trigeneration. Attention is paid to numerous system and environmental performance parameters, namely, exergy efficiency, exergy destruction rate, and greenhouse gas emissions. A maximum enhancement of 46% is achieved in the exergy efficiency when the SOFC is used as the prime mover for the trigeneration system compared to the case when the SOFC is used as a standalone unit. The main sources of irreversibility are observed to be the air heat exchanger, the SOFC and the afterburner. The unit CO2 emission (in kg/MWh) is considerably higher for the case in which only electrical power is generated. This parameter is reduced by half when the system operates in trigeneration mode. - Highlights: • A novel trigeneration system driven by a solid oxide fuel cell is analyzed. • Exergy and greenhouse gas emission analyses are performed. • Four special cases are considered. • An enhancement of up to 46% is achieved in exergy efficiency. • The CO2 emission drops to a relatively low value for the trigeneration case

  17. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data...... marine sources using both conventional small-subunit (SSU) rRNA gene analyses and our quantitative method to calculate the proportion of genomes in each sample that are capable of a particular metabolic trait. With both environments, to determine what proportion of each community they make up and how......). These analyses demonstrate how genome proportionality compares to SSU rRNA gene relative abundance and how factors such as average genome size and SSU rRNA gene copy number affect sampling probability and therefore both types of community analysis....

  18. Process for carrying out analyses based on concurrent reactions

    Energy Technology Data Exchange (ETDEWEB)

    Glover, J S; Shepherd, B P

    1980-01-03

    The invention refers to a process for carrying out analyses based on concurrent reactions. A portion of the compound to be analysed, together with a standard quantity of the same compound in labelled form, is subjected to a common reaction with a standard quantity of a reagent, which must be less than the sum of the two portions of the reacting compound. The portions of the labelled reacting compound and of the labelled final compound resulting from the competition are separated in a tube (e.g. by centrifuging) after a forced phase change (precipitation, absorption etc.), and the radioactivity of the two phases in contact is measured separately. The shielded measuring device developed for this purpose, suitable for centrifuge tubes of known dimensions, is also included in the patent claims. The insulin concentration of a defined serum is measured as an example of an application of the method (radioimmunoassay).

  19. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved).

  20. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses. Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for the analysis of practice, such a model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts...

  1. Simulation, prediction, and genetic analyses of daily methane emissions in dairy cattle.

    Science.gov (United States)

    Yin, T; Pinent, T; Brügemann, K; Simianer, H; König, S

    2015-08-01

    This study presents an approach combining phenotypes from novel traits, deterministic equations from cattle nutrition, and stochastic simulation techniques from animal breeding to generate test-day methane emissions (MEm) of dairy cows. Data included test-day production traits (milk yield, fat percentage, protein percentage, milk urea nitrogen), conformation traits (wither height, hip width, body condition score), female fertility traits (days open, calving interval, stillbirth), and health traits (clinical mastitis) from 961 first-lactation Brown Swiss cows kept on 41 low-input farms in Switzerland. Test-day MEm were predicted based on the traits from the current data set and 2 deterministic prediction equations, resulting in the traits labeled MEm1 and MEm2. Stochastic simulations were used to assign individual concentrate intake in dependency of farm-type specifications (a requirement when calculating MEm2). Genetic parameters for MEm1 and MEm2 were estimated using random regression models. Predicted MEm had moderate heritabilities over lactation and ranged from 0.15 to 0.37, with the highest heritabilities around DIM 100. Genetic correlations between MEm1 and MEm2 ranged between 0.91 and 0.94. Antagonistic genetic correlations in the range from 0.70 to 0.92 were found for the associations between MEm2 and milk yield. Genetic correlations of MEm with days open and with calving interval increased from 0.10 at the beginning to 0.90 at the end of lactation. Genetic relationships between MEm2 and stillbirth were negative (0 to -0.24) from the beginning to the peak phase of lactation. Positive genetic relationships in the range from 0.02 to 0.49 were found between MEm2 and clinical mastitis. Interpretation of genetic (co)variance components should also consider the limitations when using data generated by prediction equations. Prediction functions only describe that part of MEm which is dependent on the factors and effects included in the function. With high

  2. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

    There is a need to match the accuracy of emission estimations with the outputs of transport models. The overall error rate in long-term traffic forecasts resulting from strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emissions estimation approaches. Emission models for predicting pollutants other than CO2 are proposed. A genetic algorithm approach is adopted to select the predictor variables for the black box model. The approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness.
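
    A toy version of the genetic-algorithm variable selection described here can be written in a few dozen lines: candidate predictor subsets are encoded as bit masks, scored by the validation error of a least-squares emission model, and evolved by selection, crossover and mutation. The data and GA settings below are assumptions for illustration, not the paper's microsimulation inputs.

      import numpy as np

      # Toy GA for selecting predictor variables of a regression-style emission model.
      # The synthetic features and NOx-like target are placeholders.
      rng = np.random.default_rng(11)
      n, p = 400, 10
      X = rng.normal(size=(n, p))                      # candidate predictors (speed, accel, ...)
      beta = np.array([1.5, 0.0, -2.0, 0.0, 0.8, 0.0, 0.0, 0.5, 0.0, 0.0])
      y = X @ beta + rng.normal(0, 0.5, n)             # synthetic emission rate

      def fitness(mask):
          """Negative validation error of a least-squares fit on the selected columns."""
          if not mask.any():
              return -np.inf
          tr, va = slice(0, 300), slice(300, n)
          coef, *_ = np.linalg.lstsq(X[tr][:, mask], y[tr], rcond=None)
          err = np.mean((X[va][:, mask] @ coef - y[va]) ** 2)
          return -(err + 0.01 * mask.sum())            # small penalty per predictor

      pop = rng.random((30, p)) > 0.5                  # initial population of feature masks
      for _ in range(40):                              # generations
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-10:]]      # selection: keep the best 10
          children = []
          for _ in range(20):                          # crossover + mutation
              a, b = parents[rng.integers(0, 10, 2)]
              cut = rng.integers(1, p)
              child = np.concatenate([a[:cut], b[cut:]])
              child ^= rng.random(p) < 0.05
              children.append(child)
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(m) for m in pop])]
      print("selected predictors:", np.where(best)[0])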

  3. Contribution of milk production to global greenhouse gas emissions. An estimation based on typical farms.

    Science.gov (United States)

    Hagemann, Martin; Ndambi, Asaah; Hemme, Torsten; Latacz-Lohmann, Uwe

    2012-02-01

    Studies on the contribution of milk production to global greenhouse gas (GHG) emissions are rare (FAO 2010) and often based on crude data which do not appropriately reflect the heterogeneity of farming systems. This article estimates GHG emissions from milk production in different dairy regions of the world based on harmonised farm data and assesses the contribution of milk production to global GHG emissions. The methodology comprises three elements: (1) the International Farm Comparison Network (IFCN) concept of typical farms and the related globally standardised dairy model farms representing 45 dairy regions in 38 countries; (2) a partial life cycle assessment model for estimating GHG emissions of the typical dairy farms; and (3) standard regression analysis to estimate GHG emissions from milk production in countries for which no typical farms are available in the IFCN database. Across the 117 typical farms in the 38 countries analysed, the average emission rate is 1.50 kg CO2 equivalents (CO2-eq.)/kg milk. The contribution of milk production to global anthropogenic emissions is estimated at 1.3 Gt CO2-eq./year, accounting for 2.65% of total global anthropogenic emissions (49 Gt; IPCC, Synthesis Report for Policy Makers, Valencia, Spain, 2007). We emphasise that our estimates of the contribution of milk production to global GHG emissions are subject to uncertainty. Part of the uncertainty stems from the choice of the appropriate methods for estimating emissions at the level of the individual animal.

  4. Estimation of Methane Emissions from Municipal Solid Waste Landfills in China Based on Point Emission Sources

    Directory of Open Access Journals (Sweden)

    Cai Bo-Feng

    2014-01-01

    Citation: Cai, B.-F., Liu, J.-G., Gao, Q.-X., et al., 2014. Estimation of methane emissions from municipal solid waste landfills in China based on point emission sources. Adv. Clim. Change Res. 5(2), doi: 10.3724/SP.J.1248.2014.081.

  5. Analysing methodological choices in calculations of embodied energy and GHG emissions from buildings

    DEFF Research Database (Denmark)

    Rasmussen, Freja Nygaard; Malmqvist, Tove; Moncaster, Alice

    2018-01-01

    The importance of embodied energy and embodied greenhouse gas emissions (EEG) from buildings is gaining increased interest within building sector initiatives and on a regulatory level. In spite of recent harmonisation efforts, reported results of EEG from building case studies display large...... obtained, thus providing a framework for reinterpretation and more effective comparison. The collection of over 80 international case studies developed within the International Energy Agency’s EBC Annex 57 research programme is used as the quantitative foundation to present a comprehensive analysis......, and the combination potentials between these many parameters signifies a multitude of ways in which the outcome of EEG studies are affected....

  6. [Optical emission analyses of N2/TMG ECR plasma for deposition of GaN film].

    Science.gov (United States)

    Fu, Si-Lie; Wang, Chun-An; Chen, Jun-Fang

    2013-04-01

    The optical emission spectroscopy of a hybrid N2/trimethylgallium (TMG) plasma in an ECR-PECVD system was investigated. The results indicate that the TMG gas is strongly dissociated into Ga*, CH and H even under self-heating conditions. Ga species and metastable nitrogen molecules are dominant in the hybrid ECR plasma. The concentration of metastable nitrogen molecules increases with the microwave power. On the other hand, the concentrations of excited nitrogen molecules and of nitrogen ions decrease when the microwave power is higher than 400 W.

  7. Operational Satellite-based Surface Oil Analyses (Invited)

    Science.gov (United States)

    Streett, D.; Warren, C.

    2010-12-01

    During the Deepwater Horizon spill, NOAA imagery analysts in the Satellite Analysis Branch (SAB) issued more than 300 near-real-time satellite-based oil spill analyses. These analyses were used by the oil spill response community for planning, issuing surface oil trajectories and tasking assets (e.g., oil containment booms, skimmers, overflights). SAB analysts used both Synthetic Aperture Radar (SAR) and high resolution visible/near IR multispectral satellite imagery as well as a variety of ancillary datasets. Satellite imagery used included ENVISAT ASAR (ESA), TerraSAR-X (DLR), Cosmo-Skymed (ASI), ALOS (JAXA), Radarsat (MDA), ENVISAT MERIS (ESA), SPOT (SPOT Image Corp.), Aster (NASA), MODIS (NASA), and AVHRR (NOAA). Ancillary datasets included ocean current information, wind information, location of natural oil seeps and a variety of in situ oil observations. The analyses were made available as jpegs, pdfs, shapefiles and Google Earth KML files, and were also posted on a variety of websites including Geoplatform and ERMA. The complete archive, from the very first analysis issued just 5 hours after the rig sank through the final analysis issued in August, is still publicly available on the NOAA/NESDIS website http://www.ssd.noaa.gov/PS/MPS/deepwater.html SAB personnel also served as the Deepwater Horizon International Disaster Charter Project Manager (at the official request of the USGS). The Project Manager’s primary responsibility was to acquire and oversee the processing and dissemination of satellite data generously donated by numerous private companies and nations in support of the oil spill response, including some of the imagery described above. SAB has begun to address a number of goals that will improve our routine oil spill response as well as help assure that we are ready for the next spill of national significance. We hope to (1) secure a steady, abundant and timely stream of suitable satellite imagery even in the absence of large-scale emergencies such as

  8. [The study of electroplex emission based on PVK/BCP].

    Science.gov (United States)

    Teng, Feng; Wang, Yuan-Min; Xu, Zheng; Wang, Yong-Sheng

    2005-05-01

    Electroplex emission based on poly(N-vinylcarbazole) (PVK) and 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (BCP) has been studied. An emission peak at 595 nm was observed in the EL spectrum of the devices but not in the PL spectra. The emission originates from the transition between the excited state of BCP and the ground state of PVK. Because of the enlarged emission zone, the PVK:BCP blend device exhibited stronger electroplex emission. The electroplex emission was enhanced in both the PVK/BCP double-layer device and the PVK:BCP blend device, and it was stronger in the blend device. At higher drive voltage, only electroplex emission was observed in the blend device.

  9. Application of proton-induced X-ray emission technique to gunshot residue analyses

    International Nuclear Information System (INIS)

    Sen, P.; Panigrahi, N.; Rao, M.S.; Varier, K.M.; Sen, S.; Mehta, G.K.

    1982-01-01

    The proton-induced X-ray emission (PIXE) technique was applied to the identification and analysis of gunshot residues. Studies were made of the type of bullet and bullet hole identification, firearm discharge element profiles, the effect of various target backings, and hand swabbings. The discussion of the results reviews the sensitivity of the PIXE technique, its nondestructive nature, and its role in determining the distance from the gun to the victim and identifying the type of bullet used and whether a wound was made by a bullet or not. The high sensitivity of the PIXE technique, which is able to analyze samples as small as 0.1 to 1 ng, and its usefulness for detecting a variety of elements should make it particularly useful in firearms residue investigations

  10. Conducting Meta-Analyses Based on p Values

    Science.gov (United States)

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466

  11. PC based 8K multichannel analyser for nuclear spectroscopy

    International Nuclear Information System (INIS)

    Jain, S.K.; Gupta, J.D.; Suman Kumari, B.

    1989-01-01

    An IBM-PC based 8K multichannel analyser (MCA) has been developed which incorporates all the features of an advanced system, such as very high throughput for data acquisition in PHA as well as MCS modes, fast real-time display, extensive display manipulation facilities, various preset controls and concurrent data processing. The compact system hardware consists of a 2 bit wide NIM module and a PC add-on card. Because of the external acquisition hardware, the system, after initial programming by the PC, can acquire data independently, allowing the PC to be switched off. To attain very high throughput, the most desirable feature of an MCA, a dual-port memory architecture has been used. The asymmetric dual-port RAM, housed in the NIM module, offers 24 bit parallel access to the ADC and 8 bit wide access to the PC, which results in fast real-time histogram display on the monitor. The PC emulation software is menu driven and user friendly. It integrates a comprehensive set of commonly required application routines for concurrent data processing. After the transfer of know-how to the Electronics Corporation of India Ltd. (ECIL), this system is being produced at ECIL. (author). 5 refs., 4 figs
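
    The core bookkeeping of a multichannel analyser in PHA mode is a channel-indexed histogram. A minimal Python/numpy sketch with simulated pulse heights (the real system performs this increment in dedicated acquisition hardware):

      import numpy as np

      N_CHANNELS = 8192                                              # an 8K spectrum
      rng = np.random.default_rng(1)
      pulse_heights = rng.normal(loc=3100, scale=40, size=100_000)   # toy photopeak
      adc_codes = np.clip(pulse_heights.astype(int), 0, N_CHANNELS - 1)

      spectrum = np.zeros(N_CHANNELS, dtype=np.uint32)
      np.add.at(spectrum, adc_codes, 1)                              # increment the hit channels

      peak = int(spectrum.argmax())
      print(peak, int(spectrum[peak]))                               # peak near channel 3100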

  12. Analyser-based x-ray imaging for biomedical research

    International Nuclear Information System (INIS)

    Suortti, Pekka; Keyriläinen, Jani; Thomlinson, William

    2013-01-01

    Analyser-based imaging (ABI) is one of the several phase-contrast x-ray imaging techniques being pursued at synchrotron radiation facilities. With advancements in compact source technology, there is a possibility that ABI will become a clinical imaging modality. This paper presents the history of ABI as it has developed from its laboratory source to synchrotron imaging. The fundamental physics of phase-contrast imaging is presented both in a general sense and specifically for ABI. The technology is dependent on the use of perfect crystal monochromator optics. The theory of the x-ray optics is developed and presented in a way that will allow optimization of the imaging for specific biomedical systems. The advancement of analytical algorithms to produce separate images of the sample absorption, refraction angle map and small-angle x-ray scattering is detailed. Several detailed applications to biomedical imaging are presented to illustrate the broad range of systems and body sites studied preclinically to date: breast, cartilage and bone, soft tissue and organs. Ultimately, the application of ABI in clinical imaging will depend partly on the availability of compact sources with sufficient x-ray intensity comparable with that of the current synchrotron environment. (paper)

  13. Progress Report on Computational Analyses of Water-Based NSTF

    Energy Technology Data Exchange (ETDEWEB)

    Lv, Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Kraus, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Hu, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Bucknor, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Lisowski, D. [Argonne National Lab. (ANL), Argonne, IL (United States); Nunez, D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-01

    CFD analysis has been focused on important component-level phenomena using STARCCM+ to supplement the system analysis of integral system behavior. A notable area of interest was the cavity region. This area is of particular interest for CFD analysis due to the multi-dimensional flow and complex heat transfer (thermal radiation heat transfer and natural convection), which are not simulated directly by RELAP5. CFD simulations allow for the estimation of the boundary heat flux distribution along the riser tubes, which is needed in the RELAP5 simulations. The CFD results can also provide additional data to help establish what level of modeling detail is necessary in RELAP5. It was found that the flow profiles in the cavity region are simpler for the water-based concept than for the air-cooled concept. The local heat flux noticeably increases axially, and is higher in the fins than in the riser tubes. These results were utilized in RELAP5 simulations as boundary conditions, to provide better temperature predictions in the system level analyses. It was also determined that temperatures were higher in the fins than the riser tubes, but within design limits for thermal stresses. Higher temperature predictions were identified in the edge fins, in part due to additional thermal radiation from the side cavity walls.

  14. Model-Based Recursive Partitioning for Subgroup Analyses.

    Science.gov (United States)

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulates challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to the effect of treatment with Riluzole, the only drug currently approved for this disease.
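
    As a toy illustration of the underlying idea (not the actual model-based recursive partitioning algorithm, and with simulated data), one can compare treatment-effect estimates on either side of candidate splits of a covariate and keep the split with the largest contrast:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 400
      age = rng.uniform(20, 80, n)                      # candidate partitioning variable
      treated = rng.integers(0, 2, n)
      effect = np.where(age > 50, 2.0, 0.0)             # treatment helps only older patients
      y = 1.0 + effect * treated + rng.normal(size=n)

      def treatment_effect(mask: np.ndarray) -> float:
          """Difference in mean outcome, treated minus control, within a subgroup."""
          return y[mask & (treated == 1)].mean() - y[mask & (treated == 0)].mean()

      best = max(
          (abs(treatment_effect(age <= c) - treatment_effect(age > c)), c)
          for c in np.quantile(age, np.linspace(0.1, 0.9, 17))
      )
      print(f"largest effect contrast {best[0]:.2f} at split age <= {best[1]:.1f}")   # near 50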

  15. New measurements in plutonium L X ray emission spectrum using an electron probe micro-analyser

    International Nuclear Information System (INIS)

    Bobin, J.L.; Despres, J.

    1966-01-01

    Further studies by means of an electron-probe micro-analyser allowed the authors of report CEA-R-1798 to compile a larger plutonium L X-ray spectrum table. Measurements of the excitation potentials of the plutonium L II and L III levels have also been carried out. Some remarks about apparatus performance (such as spectrograph sensitivity, resolving power and accuracy) can be found in the appendix. (authors) [fr]

  16. The activity-based methodology to assess ship emissions - A review

    International Nuclear Information System (INIS)

    Nunes, R.A.O.; Alvim-Ferraz, M.C.M.; Martins, F.G.; Sousa, S.I.V.

    2017-01-01

    Several studies tried to estimate atmospheric emissions with origin in the maritime sector, concluding that it contributed to the global anthropogenic emissions through the emission of pollutants that have a strong impact on human health and also on climate change. Thus, this paper aimed to review published studies since 2010 that used activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was extracted and recorded, namely technical information about ships, ship activity and movement information, engines, fuels, load and emission factors. Most of the studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from Lloyd's Register of Shipping database with other sources such as port authority databases, engine manufacturers and ship-owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report activities and movements of ships. To predict ship powers, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, and the use of load-balancing schemes for multi-engine installations seemed to be the best practices for more accurate ship emission estimations. For emission factors improvement, new on-board measurement campaigns or studies should be undertaken. Regardless of the effort that has been performed in the last years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology to develop global and universally accepted emission inventories
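
    The generic activity-based bookkeeping shared by the reviewed studies is emissions = installed power x load factor x operating hours x emission factor, summed over operating phases. A hedged Python sketch with placeholder numbers (not values from the review):

      PHASES = {
          # phase:        (power_kW, load_factor, hours, NOx_g_per_kWh)
          "cruising":     (12_000,   0.80,        120.0, 14.0),
          "manoeuvring":  (12_000,   0.20,          4.0, 12.0),
          "hotelling":    (1_500,    0.40,         36.0, 13.0),
      }

      def nox_emissions_kg(phases: dict) -> float:
          """Sum power x load x hours x emission factor over phases, converted from g to kg."""
          return sum(p * lf * h * ef for p, lf, h, ef in phases.values()) / 1000.0

      print(f"NOx over the voyage: {nox_emissions_kg(PHASES):.0f} kg")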

  17. Impact of longevity on greenhouse gas emissions and profitability of individual dairy cows analysed with different system boundaries.

    Science.gov (United States)

    Grandl, F; Furger, M; Kreuzer, M; Zehetmeier, M

    2018-05-29

    Dairy production systems are often criticized as being major emitters of greenhouse gases (GHG). In this context, the extension of the length of the productive life of dairy cows is gaining interest as a potential GHG mitigation option. In the present study, we investigated cow and system GHG emission intensity and profitability based on data from 30 dairy cows of different productive lifetime fed either no or limited amounts of concentrate. Detailed information concerning productivity, feeding and individual enteric methane emissions of the individuals was available from a controlled experiment and herd book databases. A simplified GHG balance was calculated for each animal based on the milk produced at the time of the experiment and for their entire lifetime milk production. For the lifetime production, we also included the emissions arising from potential beef produced by fattening the offspring of the dairy cows. This accounted for the effect that changes in the length of productive life will affect the replacement rate and thus the number of calves that can be used for beef production. Profitability was assessed by calculating revenues and full economic costs for the cows in the data set. Both emission intensity and profitability were most favourable in cows with long productive life, whereas cows that had not finished their first lactation performed particularly unfavourably with regard to their emissions per unit of product and rearing costs were mostly not repaid. Including the potential beef production, GHG emissions in relation to total production of animal protein also decreased with age, but the overall variability was greater, as the individual cow history (lifetime milk yield, twin births, stillbirths, etc.) added further sources of variation. The present results show that increasing the length of productive life of dairy cows is a viable way to reduce the climate impact and to improve profitability of dairy production.

  18. Classification of hydromagnetic emissions based on frequency--time spectra

    International Nuclear Information System (INIS)

    Fukunishi, H.; Toya, T.; Koike, K.; Kuwashima, M.; Kawamura, M.

    1981-01-01

    By using 3035 hydromagnetic emission events observed in the frequency range of 0.1-2.0 Hz at Syowa (L ≈ 6), HM emissions have been classified into eight subtypes based on their spectral structures, i.e., HM whistler, periodic HM emission, HM chorus, HM emission burst, IPDP, morning IPDP, Pc 1--2 band, and irregular HM emission. It is seen that each subtype has a preferential magnetic local time interval and also a frequency range for its occurrence. Morning IPDP events and irregular HM emissions occur in the magnetic morning hours, while dispersive periodic HM emissions and HM emission bursts occur around magnetic local noon, then HM chorus emissions occur in the afternoon hours and IPDP events occur in the evening hours. Furthermore, it is noticed that the mid-frequencies of these emissions vary from high frequencies in the morning hours to low frequencies in the afternoon hours. On the basis of these results, the generation mechanisms of each subtype are discussed

  19. The activity-based methodology to assess ship emissions - A review.

    Science.gov (United States)

    Nunes, R A O; Alvim-Ferraz, M C M; Martins, F G; Sousa, S I V

    2017-12-01

    Several studies tried to estimate atmospheric emissions with origin in the maritime sector, concluding that it contributed to the global anthropogenic emissions through the emission of pollutants that have a strong impact on human health and also on climate change. Thus, this paper aimed to review published studies since 2010 that used activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was extracted and recorded, namely technical information about ships, ship activity and movement information, engines, fuels, load and emission factors. Most of the studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from Lloyd's Register of Shipping database with other sources such as port authority databases, engine manufacturers and ship-owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report activities and movements of ships. To predict ship powers, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, and the use of load-balancing schemes for multi-engine installations seemed to be the best practices for more accurate ship emission estimations. For emission factors improvement, new on-board measurement campaigns or studies should be undertaken. Regardless of the effort that has been performed in the last years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology to develop global and universally accepted emission inventories for an

  20. Simulation-based Investigations of Electrostatic Beam Energy Analysers

    CERN Document Server

    Pahl, Hannes

    2015-01-01

    An energy analyser is needed to measure the beam energy profile behind the REX-EBIS at ISOLDE. The device should be able to operate with an accuracy of 1 V at voltages up to 30 kV. In order to find a working concept for an electrostatic energy analyser different designs were evaluated with simulations. A spherical device and its design issues are presented. The potential deformation effects of grids at high voltages and their influence on the energy resolution were investigated. First tests were made with a grid-free ring electrode device and show promising results.

  1. Techniques for Scaling Up Analyses Based on Pre-interpretations

    DEFF Research Database (Denmark)

    Gallagher, John Patrick; Henriksen, Kim Steen; Banda, Gourinath

    2005-01-01

    a variety of analyses, both generic (such as mode analysis) and program-specific (with respect to a type describing some particular property of interest). Previous work demonstrated the approach using pre-interpretations over small domains. In this paper we present techniques that allow the method...

  2. Exergy, exergoeconomic and environmental analyses and evolutionary algorithm based multi-objective optimization of combined cycle power plants

    International Nuclear Information System (INIS)

    Ahmadi, Pouria; Dincer, Ibrahim; Rosen, Marc A.

    2011-01-01

    A comprehensive exergy, exergoeconomic and environmental impact analysis and optimization of several combined cycle power plants (CCPPs) is reported. In the first part, thermodynamic analyses based on energy and exergy of the CCPPs are performed, and the effect of supplementary firing on the natural gas-fired CCPP is investigated. The latter step includes the effect of supplementary firing on the performance of the bottoming cycle and on CO2 emissions, and utilizes the first and second laws of thermodynamics. In the second part, a multi-objective optimization is performed to determine the 'best' design parameters, accounting for exergetic, economic and environmental factors. The optimization considers three objective functions: CCPP exergy efficiency, total cost rate of the system products and CO2 emissions of the overall plant. The environmental impact in terms of CO2 emissions is integrated with the exergoeconomic objective function as a new objective function. The results of both exergy and exergoeconomic analyses show that the largest exergy destructions occur in the CCPP combustion chamber, and that increasing the gas turbine inlet temperature decreases the CCPP cost of exergy destruction. The optimization results demonstrate that CO2 emissions are reduced by selecting the best components and using a low fuel injection rate into the combustion chamber. -- Highlights: → Comprehensive thermodynamic modeling of a combined cycle power plant. → Exergy, economic and environmental analyses of the system. → Investigation of the role of multiobjective exergoenvironmental optimization as a tool for more environmentally-benign design.
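
    A purely illustrative Python/numpy sketch of the multi-objective selection step (random stand-in designs and two objectives to be minimised; this is not the paper's evolutionary algorithm or plant model):

      import numpy as np

      rng = np.random.default_rng(3)
      designs = rng.uniform(size=(500, 2))      # columns: [total cost rate, CO2 emission rate]

      def pareto_front(points: np.ndarray) -> np.ndarray:
          """Keep the non-dominated points (no other point is at least as good in all objectives and better in one)."""
          keep = np.ones(len(points), dtype=bool)
          for i, p in enumerate(points):
              if keep[i]:
                  dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
                  keep[dominated] = False
          return points[keep]

      front = pareto_front(designs)
      print(f"{len(front)} non-dominated designs out of {len(designs)}")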

  3. Global radioxenon emission inventory based on nuclear power reactor reports.

    Science.gov (United States)

    Kalinowski, Martin B; Tuma, Matthias P

    2009-01-01

    Atmospheric radioactivity is monitored for the verification of the Comprehensive Nuclear-Test-Ban Treaty, with xenon isotopes 131mXe, 133Xe, 133mXe and 135Xe serving as important indicators of nuclear explosions. The treaty-relevant interpretation of atmospheric concentrations of radioxenon is enhanced by quantifying radioxenon emissions released from civilian facilities. This paper presents the first global radioxenon emission inventory for nuclear power plants, based on North American and European emission reports for the years 1995-2005. Estimations were made for all power plant sites for which emission data were unavailable. According to this inventory, a total of 1.3PBq of radioxenon isotopes are released by nuclear power plants as continuous or pulsed emissions in a generic year.

  4. Combining emission inventory and isotope ratio analyses for quantitative source apportionment of heavy metals in agricultural soil.

    Science.gov (United States)

    Chen, Lian; Zhou, Shenglu; Wu, Shaohua; Wang, Chunhui; Li, Baojie; Li, Yan; Wang, Junxiao

    2018-08-01

    Two quantitative methods (emission inventory and isotope ratio analysis) were combined to apportion source contributions of heavy metals entering agricultural soils in the Lihe River watershed (Taihu region, east China). Source apportionment based on the emission inventory method indicated that for Cd, Cr, Cu, Pb, and Zn, the mean percentage input from atmospheric deposition was highest (62-85%), followed by irrigation (12-27%) and fertilization (1-14%). Thus, the heavy metals were derived mainly from industrial activities and traffic emissions. For Ni the combined percentage input from irrigation and fertilization was approximately 20% higher than that from atmospheric deposition, indicating that Ni was mainly derived from agricultural activities. Based on isotope ratio analysis, atmospheric deposition accounted for 57-93% of Pb entering soil, with the mean value of 69.3%, which indicates that this was the major source of Pb entering soil in the study area. The mean contributions of irrigation and fertilization to Pb pollution of soil ranged from 0% to 10%, indicating that they played only a marginally important role. Overall, the results obtained using the two methods were similar. This study provides a reliable approach for source apportionment of heavy metals entering agricultural soils in the study area, and clearly have potential application for future studies in other regions. Copyright © 2018 Elsevier Ltd. All rights reserved.
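
    The isotope-ratio apportionment rests on a two-end-member mixing model. A hedged Python sketch with invented 206Pb/207Pb ratios (the study's actual end-member values are not given in the abstract):

      def source_fraction(r_sample: float, r_source_a: float, r_source_b: float) -> float:
          """Fraction of Pb attributable to source A, from a single isotope ratio."""
          return (r_sample - r_source_b) / (r_source_a - r_source_b)

      # Hypothetical ratios: atmospheric deposition (A) vs. natural background (B)
      f_atm = source_fraction(r_sample=1.170, r_source_a=1.155, r_source_b=1.200)
      print(f"atmospheric deposition share: {f_atm:.0%}")    # 67% with these toy numbers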

  5. A Web-based Tool Combining Different Type Analyses

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    of both, and they can be goal-dependent or goal-independent. We describe a prototype tool that can be accessed from a web browser, allowing various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program...... the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type....

  6. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  7. Environmental emissions by Chinese industry: Exergy-based unifying assessment

    International Nuclear Information System (INIS)

    Bo Zhang; Chen, G.Q.; Xia, X.H.; Li, S.C.; Chen, Z.M.; Xi Ji

    2012-01-01

    Based on chemical exergy as an objective measure for the chemical deviation between the emission and the environment, a unifying assessment is carried out for major environmental emissions covering COD, ammonia nitrogen, SO 2 , soot, dust, NO x and solid waste by Chinese industry over 1997–2006, with emphasis on the sectoral and regional levels in 2006. Of the total emission in exergy up to 274.1 PJ in 2006, 67.7% is estimated from waste gases, 29.9% from waste water and 2.4% from solid waste. Five of 40 sectors and 12 of 30 regions are responsible for 72.7% and 65.5% of the total emission, respectively. SO 2 is the leading emission type in 9 sectors and 25 regions, and COD in another 28 sectors and 5 regions. Some pollution-intensive sectors such as Production and Distribution of Electric Power and Heat Power and Manufacture of Paper and Paper Products, and western and inland regions such as Guangxi and Ningxia with high emission intensities are identified. By clustering and disjoint principal component analysis with intensities of emissions and fuel coal use as variables, three principal components are extracted, and four statistically significant clusters are pinpointed in the sectoral and regional analysis. Corresponding policy-making implications are addressed. - Highlights: ► A chemical exergy-based unifying assessment for industrial emissions is performed. ► The emissions at the sectoral/regional levels in 2006 are systematically revealed. ► The main principal components and clusters for emission intensities are pinpointed.

  8. Increased sensitivity in thick-target particle induced X-ray emission analyses using dry ashing for preconcentration

    International Nuclear Information System (INIS)

    Lill, J.-O.; Harju, L.; Saarela, K.-E.; Lindroos, A.; Heselius, S.-J.

    1999-01-01

    The sensitivity in thick-target particle induced X-ray emission (PIXE) analyses of biological materials can be enhanced by dry ashing. The gain depends mainly on the mass reduction factor and the composition of the residual ash. The enhancement factor was 7 for the certified reference material Pine Needles and the limits of detection (LODs) were below 0.2 μg/g for Zn, Cu, Rb and Sr. When ashing biological materials with low ash contents such as wood of pine or spruce (0.3% of dry weight) and honey (0.1% of wet weight) the gain was far greater. The LODs for these materials were 30 ng/g for wood and below 10 ng/g for honey. In addition, the ashed samples were more homogenous and more resistant to changes during the irradiation than the original biological samples. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  9. An Apple II -based bidimensional pulse height analyser

    International Nuclear Information System (INIS)

    Bateman, J.E.; Flesher, A.C.; Honeyman, R.N.; Pritchard, T.E.; Price, W.P.R.

    1984-06-01

    The implementation of a pulse height analyser function in an Apple II microcomputer using minimal purpose built hardware is described. Except for a small interface module the system consists of two suites of software, one giving a conventional one dimensional analysis on a span of 1024 channels, and the other a two dimensional analysis on a 128 x 128 image format. Using the recently introduced ACCELERATOR coprocessor card the system performs with a dead time per event of less than 50 μS. Full software facilities are provided for display, storage and processing of the data using standard Applesoft BASIC. (author)

  10. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformation. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF is not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...
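
    A minimal kernel-PCA sketch in Python with scikit-learn (the record also uses kernel MAF/MNF, which scikit-learn does not provide; the data below are random stand-ins for multispectral ham images):

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA

      rng = np.random.default_rng(4)
      pixels = rng.normal(size=(1000, 6))       # e.g. 1000 pixels x 6 spectral bands

      linear_scores = PCA(n_components=2).fit_transform(pixels)
      kernel_scores = KernelPCA(n_components=2, kernel="rbf", gamma=0.1).fit_transform(pixels)

      print(linear_scores.shape, kernel_scores.shape)   # (1000, 2) (1000, 2)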

  11. Sensitivity and uncertainty analyses in aging risk-based prioritizations

    International Nuclear Information System (INIS)

    Hassan, M.; Uryas'ev, S.; Vesely, W.E.

    1993-01-01

    Aging risk evaluations of nuclear power plants using Probabilistic Risk Analyses (PRAs) involve assessments of the impact of aging structures, systems, and components (SSCs) on plant core damage frequency (CDF). These assessments can be used to prioritize the contributors to aging risk reflecting the relative risk potential of the SSCs. Aging prioritizations are important for identifying the SSCs contributing most to plant risk and can provide a systematic basis on which aging risk control and management strategies for a plant can be developed. However, these prioritizations are subject to variabilities arising from uncertainties in data, and/or from various modeling assumptions. The objective of this paper is to present an evaluation of the sensitivity of aging prioritizations of active components to uncertainties in aging risk quantifications. Approaches for robust prioritization of SSCs also are presented which are less susceptible to the uncertainties

  12. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) Model–I: building an emissions data base

    Directory of Open Access Journals (Sweden)

    S. F. Mueller

    2010-05-01

    Full Text Available A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates non-methane volatile organic compound (NMVOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, windblown dust particulate, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (NMVOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the

  13. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) Model-I: building an emissions data base

    Science.gov (United States)

    Smith, S. N.; Mueller, S. F.

    2010-05-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates non-methane volatile organic compound (NMVOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, windblown dust particulate, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (NMVOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the atmosphere

  14. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) model - Part 1: Building an emissions data base

    Science.gov (United States)

    Smith, S. N.; Mueller, S. F.

    2010-01-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates volatile organic compound (VOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as windblown dust and sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (VOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the atmosphere. The seasonality and

  15. Inventory of atmospheric pollutant and greenhouse gas emissions in France. Sectoral series and extended analyses - SECTEN Format, April 2011

    International Nuclear Information System (INIS)

    Chang, Jean-Pierre; Fontelle, Jean-Pierre; Serveau, Laetitia; Allemand, Nadine; Jeannot, Coralie; Andre, Jean-Marc; Joya, Romain; Deflorenne, Emmanuel; Martinet, Yann; Druart, Ariane; Mathias, Etienne; Gavel, Antoine; Nicco, Laetitia; Gueguen, Celine; Prouteau, Emilie; Jabot, Julien; Tuddenham, Mark; Jacquier, Guillaume; Vincent, Julien

    2011-04-01

    2009. Regarding the greenhouse gases, the trend is rather a slight increase (2.2% between 2009 and 2010 for CO2 and 1.4% in terms of GWP), because 2009 was strongly marked by the economic crisis and a recovery began in 2010. The preliminary estimates for the year 2010 should therefore be considered with caution because they still need to be consolidated. The results are presented at national level for each of the main sectors defined in the SECTEN format. A more detailed breakdown of each main sector is provided for the period 1990-2009. Results also focus on the different energy products, and several analyses provide additional information on NMVOCs, PAHs, HFCs, PFCs, global warming potential and particular sources, such as transport and off-road mobile sources (generators, machinery and vehicles used in construction, industry, agriculture and forestry, as well as household and gardening machinery). The report contains indications regarding the targets to which France has committed itself under international conventions and EU directives, in particular for climate change and for transboundary air pollution and air quality. These results show that, on the whole, the emission trends observed are encouraging and largely reflect the reduction actions implemented. The table below summarises total emissions over the period 1990-2010 for all the above mentioned substances, as well as indicators concerning acidification and the greenhouse effect

  16. Evaluation of an optoacoustic based gas analysing device

    Science.gov (United States)

    Markmann, Janine; Lange, Birgit; Theisen-Kunde, Dirk; Danicke, Veit; Mayorov, Fedor; Eckert, Sebastian; Kettmann, Pascal; Brinkmann, Ralf

    2017-07-01

    The relative occurrence of volatile organic compounds in human respiratory gas is disease-specific (ppb range). A prototype of a gas analysing device using two tuneable laser systems, an OPO-laser (2.5 to 10 μm) and a CO2-laser (9 to 11 μm), and an optoacoustic measurement cell was developed to detect concentrations in the ppb range. The sensitivity and resolution of the system were determined by test gas measurements, measuring ethylene and sulfur hexafluoride with the CO2-laser and butane with the OPO-laser. System sensitivity was found to be 13 ppb for sulfur hexafluoride and 17 ppb for ethylene. Respiratory gas samples of 8 healthy volunteers were investigated by irradiation with 17 laser lines of the CO2-laser. Several of those lines overlap with strong absorption bands of ammonia. As it is known that ammonia concentration increases with age, a separation of volunteers younger and older than 35 years was attempted. To evaluate the data, the first seven gas samples were used to train a discriminant analysis algorithm. The eighth subject was then assigned correctly to the group >35 years with the age of 49 years.
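
    A hedged Python sketch of the classification step described above, with random numbers standing in for the measured optoacoustic responses (scikit-learn's discriminant analysis is used here as a stand-in for whatever implementation the authors used):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(5)
      signals = rng.normal(size=(8, 17))                 # 8 subjects x 17 CO2-laser lines
      age_group = np.array([0, 0, 0, 1, 1, 1, 0, 1])     # 0: 35 years or younger, 1: older than 35

      clf = LinearDiscriminantAnalysis().fit(signals[:7], age_group[:7])   # train on the first 7 subjects
      print("predicted group for subject 8:", int(clf.predict(signals[7:8])[0]))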

  17. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance, network errors and troubleshooting is very important. Meaningful test results will allow the operators to evaluate network performance and any shortcomings, and to better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of fuzzy logic theory, the concepts of which will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM’s capability, presents a success story in which STAM is successfully applied.
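
    A toy Python sketch of the bottom-up fuzzy aggregation idea (the membership functions, metrics and weights below are invented, not STAM's actual ones):

      def triangular(x: float, a: float, b: float, c: float) -> float:
          """Triangular membership function rising from a, peaking at b, falling to c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def aggregate(memberships: dict, weights: dict) -> float:
          """Weighted-average defuzzification of per-metric membership degrees."""
          return sum(memberships[k] * weights[k] for k in memberships) / sum(weights.values())

      # Degree to which each test metric counts as "good", then an overall network score
      memberships = {
          "call_setup_delay": triangular(180.0, 0.0, 100.0, 400.0),   # milliseconds
          "packet_loss":      triangular(0.5, -1.0, 0.0, 2.0),        # percent
      }
      weights = {"call_setup_delay": 0.4, "packet_loss": 0.6}
      print(f"overall 'good performance' degree: {aggregate(memberships, weights):.2f}")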

  18. Visualizing Confidence in Cluster-Based Ensemble Weather Forecast Analyses.

    Science.gov (United States)

    Kumpf, Alexander; Tost, Bianca; Baumgart, Marlene; Riemer, Michael; Westermann, Rudiger; Rautenhaus, Marc

    2018-01-01

    In meteorology, cluster analysis is frequently used to determine representative trends in ensemble weather predictions in a selected spatio-temporal region, e.g., to reduce a set of ensemble members to simplify and improve their analysis. Identified clusters (i.e., groups of similar members), however, can be very sensitive to small changes of the selected region, so that clustering results can be misleading and bias subsequent analyses. In this article, we - a team of visualization scientists and meteorologists-deliver visual analytics solutions to analyze the sensitivity of clustering results with respect to changes of a selected region. We propose an interactive visual interface that enables simultaneous visualization of a) the variation in composition of identified clusters (i.e., their robustness), b) the variability in cluster membership for individual ensemble members, and c) the uncertainty in the spatial locations of identified trends. We demonstrate that our solution shows meteorologists how representative a clustering result is, and with respect to which changes in the selected region it becomes unstable. Furthermore, our solution helps to identify those ensemble members which stably belong to a given cluster and can thus be considered similar. In a real-world application case we show how our approach is used to analyze the clustering behavior of different regions in a forecast of "Tropical Cyclone Karl", guiding the user towards the cluster robustness information required for subsequent ensemble analysis.

  19. A Cyber-Attack Detection Model Based on Multivariate Analyses

    Science.gov (United States)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model based on applying two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequence via the quantification method IV, and group similar audit event sequences based on the cluster analysis. It is shown in simulation experiments that our model can improve the cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.

  20. Directional Canopy Emissivity Estimation Based on Spectral Invariants

    Science.gov (United States)

    Guo, M.; Cao, B.; Ren, H.; Yongming, D.; Peng, J.; Fan, W.

    2017-12-01

    Land surface emissivity is a crucial parameter for estimating land surface temperature from remote sensing data and also plays an important role in the physical process of surface energy and water balance from local to global scales. The emissivity varies with surface type and cover. For vegetation, canopy emissivity depends on vegetation type, viewing zenith angle and canopy structure, which changes over the growing stages. Many previous studies have focused on emissivity models, but few of them are analytic and suited to different canopy structures. In this paper, a new physical analytic model is proposed to estimate the directional emissivity of a homogeneous vegetation canopy based on spectral invariants. The initial model accounts for the directional absorption in six parts: the direct absorption of the canopy and the soil, and the absorption of the canopy and the soil after a single scattering and after multiple scattering within the canopy-soil system. In order to estimate the emissivity analytically, the pathways of photons absorbed in the canopy-soil system are traced using the re-collision probability in Fig. 1. After a sensitivity analysis of the above six absorptions, the initial complicated model was further simplified into a fixed mathematical expression for estimating the directional emissivity of a vegetation canopy. The model was compared with the 4SAIL, FRA97, FRA02 and DART models in Fig. 2, and the results showed that, relative to the new model, the FRA02 model significantly underestimates emissivity while the FRA97 model slightly underestimates it. In contrast, the emissivity difference between the new model and the 4SAIL and DART models was found to be less than 0.002. In general, since the new model has the advantages of a mathematical expression with accurate results and a clear physical meaning, it is promising to be extended to simulate the directional emissivity of a discrete canopy in further study.

  1. Analysing co-articulation using frame-based feature trajectories

    CSIR Research Space (South Africa)

    Badenhorst, J

    2010-11-01

    Full Text Available The authors investigate several approaches aimed at a more detailed understanding of co-articulation in spoken utterances. They find that the Euclidean difference between instantaneous frame-based feature values and the mean values of these features...

  2. PCR and RFLP analyses based on the ribosomal protein operon

    Science.gov (United States)

    Differentiation and classification of phytoplasmas have been primarily based on the highly conserved 16S rRNA gene. RFLP analysis of 16S rRNA gene sequences has identified 31 16S rRNA (16Sr) groups and more than 100 16Sr subgroups. Classification of phytoplasma strains can, however, become more refin

  3. Analysing Leontiev Tube Capabilities in the Space-based Plants

    Directory of Open Access Journals (Sweden)

    N. L. Shchegolev

    2017-01-01

    Full Text Available The paper presents a review of publications dedicated to the gas-dynamic temperature stratification device (the Leontiev tube) and shows the main factors affecting its efficiency. It describes an experimental installation used to obtain data on the magnitude of energy separation in air and to demonstrate the operability of this device. The assumption that there is an optimal relationship between the flow velocities in the subsonic and supersonic channels of the gas-dynamic temperature stratification device is experimentally confirmed. The paper analyses possible ways to raise the efficiency of power plants of various basing (including space basing) and shows that, currently, the mainstream approach to increasing their operating efficiency is to complicate design solutions. A scheme of a closed gas-turbine space-based plant using a mixture of inert gases (a helium-xenon one) for operation is proposed. It differs from the simplest variants in the absence of a cooler-radiator and in the integration of a gas-dynamic temperature stratification device and a heat compressor. Based on the equations of one-dimensional gas dynamics, it is shown that the ability to restore total pressure when removing heat in the thermal compressor determines the operating capability of this scheme. An exploratory study of creating a heat compressor is performed, and it is shown that when operating on gases with a Prandtl number close to 1 the total pressure does not increase. The conditions for operating capability of the heat compressor are operation on gases with a low value of the Prandtl number (a helium-xenon mixture) at high supersonic velocities and with a longitudinal pressure gradient available. It is shown that there is a region of low values of the Prandtl number (Pr < 0.3) for which, with a longitudinal pressure gradient available in supersonic flows of a viscous gas, the total pressure can be restored.

  4. Design of the storage location based on the ABC analyses

    Science.gov (United States)

    Jemelka, Milan; Chramcov, Bronislav; Kříž, Pavel

    2016-06-01

    The paper focuses on process efficiency and saving storage costs. Maintaining inventory through a putaway strategy takes personnel time and costs money. The aim is to control inventory in the best way. The ABC classification based on Vilfredo Pareto's theory is used for the design of the warehouse layout. The new design of storage locations reduces the travel distance of forklift trucks and total costs, and it increases inventory process efficiency. The suggested solutions and the evaluation of achieved results are described in detail. The proposed solutions were realized in real warehouse operation.
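
    A short Python sketch of the ABC (Pareto) classification step described above, with illustrative picking frequencies and the customary 80%/95% cumulative-share cut-offs (not necessarily the thresholds used in the paper):

      def abc_classes(picks_per_item: dict, a_cut: float = 0.80, b_cut: float = 0.95) -> dict:
          """Rank items by picking frequency and assign A/B/C by cumulative share."""
          total = sum(picks_per_item.values())
          classes, cumulative = {}, 0.0
          for item, picks in sorted(picks_per_item.items(), key=lambda kv: kv[1], reverse=True):
              cumulative += picks / total
              classes[item] = "A" if cumulative <= a_cut else "B" if cumulative <= b_cut else "C"
          return classes

      demo = {"SKU1": 520, "SKU2": 260, "SKU3": 120, "SKU4": 60, "SKU5": 40}
      print(abc_classes(demo))    # high-turnover items become class A and get the closest storage slots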

  5. Parametric and Wavelet Analyses of Acoustic Emission Signals for the Identification of Failure Modes in CFRP Composites Using PZT and PVDF Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Prasopchaichana, Kritsada; Kwon, Oh Yang [Inha University, Incheon (Korea, Republic of)

    2007-12-15

    Combination of the parametric and the wavelet analyses of acoustic emission (AE) signals was applied to identify the failure modes in carbon fiber reinforced plastic (CFRP) composite laminates during tensile testing. AE signals detected by surface mounted lead-zirconate-titanate (PZT) and polyvinylidene fluoride (PVDF) sensors were analyzed by parametric analysis based on the time of occurrence which classifies AE signals corresponding to failure modes. The frequency band level-energy analysis can distinguish the dominant frequency band for each failure mode. It was observed that the same type of failure mechanism produced signals with different characteristics depending on the stacking sequences and the type of sensors. This indicates that the proposed method can identify the failure modes of the signals if the stacking sequences and the sensors used are known

  6. Economic evaluation of algae biodiesel based on meta-analyses

    Science.gov (United States)

    Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.

    2017-08-01

    The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that improvements in the yield, quality, and cost of algae feedstock could be the key factors to make algae-derived biodiesel economically viable.

  7. Mechanical Seal Opening Condition Monitoring Based on Acoustic Emission Technology

    Directory of Open Access Journals (Sweden)

    Erqing Zhang

    2014-06-01

    Full Text Available Since the measurement of mechanical sealing film thickness and just-lift-off time is very difficult, a sealing film condition monitoring method based on the acoustic emission signal is proposed. The mechanical seal acoustic emission signal presents obvious time-varying, nonlinear and pulsating characteristics. In this paper, the acoustic emission signal is used to monitor the just-lift-off time and friction condition of the seal end faces. The acoustic emission signal is decomposed by empirical mode decomposition into a series of intrinsic mode functions with independent characteristics at different time scales and in different frequency bands. The acoustic emission signal generated only by end-face friction is obtained by eliminating the false intrinsic mode function components. The correlation coefficient between the acoustic emission signal and the multi-scale Laplace wavelet is then calculated. It is shown that the maximum of the correlation coefficient (at a frequency of 8000 Hz) appears at a spindle speed of 300 rpm, at which point the end faces have just lifted off. A set of mechanical oil seal running tests demonstrates that this method can accurately identify the just-lift-off time and friction condition of the mechanical seal end faces.

  8. Evidence for Endothermy in Pterosaurs Based on Flight Capability Analyses

    Science.gov (United States)

    Jenkins, H. S.; Pratson, L. F.

    2005-12-01

    Previous attempts to constrain flight capability in pterosaurs have relied heavily on the fossil record, using bone articulation and apparent muscle allocation to evaluate flight potential (Frey et al., 1997; Padian, 1983; Bramwell, 1974). However, the physical parameters necessary for flight in pterosaurs remain loosely defined, and few systematic approaches to constraining flight capability have been synthesized (Templin, 2000; Padian, 1983). Here we present a new method to assess flight capability in pterosaurs as a function of humerus length and flight velocity. By creating an energy-balance model to evaluate the power required for flight against the power available to the animal, we derive a 'U'-shaped power curve and infer optimal flight speeds and maximal wingspan lengths for the pterosaurs Quetzalcoatlus northropi and Pteranodon ingens. Our model corroborates empirically derived power curves for the modern black-billed magpie (Pica pica) and accurately reproduces the mechanical power curve for modern cockatiels (Nymphicus hollandicus) (Tobalske et al., 2003). When we adjust our model to include an endothermic metabolic rate for pterosaurs, we find a maximal wingspan length of 18 meters for Q. northropi. Model runs using an ectothermic metabolism yield maximal wingspans of 6-8 meters. As estimates based on fossil evidence show total wingspan lengths reaching up to 15 meters for Q. northropi, we conclude that large pterosaurs may have been endothermic and therefore more metabolically similar to birds than to reptiles.
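
    The 'U'-shaped power curve mentioned here follows from the standard fixed-wing decomposition of required power into induced and parasite terms (a textbook approximation, not necessarily the authors' exact model):

    ```latex
    \[
    P_\mathrm{req}(v) \;\approx\;
    \underbrace{\frac{2W^{2}}{\pi e\,\rho\, b^{2} v}}_{\text{induced power}}
    \;+\;
    \underbrace{\tfrac{1}{2}\,\rho\, S\, C_{D,0}\, v^{3}}_{\text{parasite power}}
    \]
    ```

    where W is weight, b wingspan, S wing area, rho air density, e the span efficiency and C_{D,0} the zero-lift drag coefficient. The first term dominates at low speed and the second at high speed, producing the 'U' shape; flight is possible wherever P_req lies below the power the animal's metabolism can supply.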

  9. Characteristics of On-road Diesel Vehicles: Black Carbon Emissions in Chinese Cities Based on Portable Emissions Measurement.

    Science.gov (United States)

    Zheng, Xuan; Wu, Ye; Jiang, Jingkun; Zhang, Shaojun; Liu, Huan; Song, Shaojie; Li, Zhenhua; Fan, Xiaoxiao; Fu, Lixin; Hao, Jiming

    2015-11-17

    Black carbon (BC) emissions from heavy-duty diesel vehicles (HDDVs) are rarely continuously measured using portable emission measurement systems (PEMSs). In this study, we utilize a PEMS to obtain real-world BC emission profiles for 25 HDDVs in China. The average fuel-based BC emissions of HDDVs certified according to Euro II, III, IV, and V standards are 2224 ± 251, 612 ± 740, 453 ± 584, and 152 ± 3 mg kg⁻¹, respectively. Notably, HDDVs adopting mechanical pump engines had significantly higher BC emissions than those equipped with electronic injection engines. Applying the useful features of PEMSs, we can relate instantaneous BC emissions to driving conditions using an operating mode binning methodology, and the average emission rates for Euro II to Euro IV diesel trucks can be constructed. From a macroscopic perspective, we observe that average speed is a significant factor affecting BC emissions and is well correlated with distance-based emissions (R² = 0.71). Therefore, the average fuel-based and distance-based BC emissions on congested roads are 40 and 125% higher than those on freeways. These results should be taken into consideration in future emission inventory studies.

  10. FY01 Supplemental Science and Performance Analyses, Volume 1: Scientific Bases and Analyses, Part 1 and 2

    International Nuclear Information System (INIS)

    Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23 013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054] [DIRS 124754]). By making the large amount of information developed on Yucca Mountain available in stages, the DOE intends to provide the public and interested parties with time to review the available materials and to formulate

  11. Highly charged ion based time-of-flight emission microscope

    International Nuclear Information System (INIS)

    Hamza, Alex V.; Barnes, Alan V.; Magee, Ed; Newman, Mike; Schenkel, Thomas; McDonald, Joseph W.; Schneider, Dieter H.

    2000-01-01

    An emission microscope using highly charged ions as the excitation source has been designed, constructed, and operated. A novel "acorn" objective lens has been used to simultaneously image electron and secondary ion emission. A resistive-anode position-sensitive detector is used to determine the x-y position and time of arrival of the secondary events at the microscope image plane. Contrast in the image can be based on the intensity of the electron emission and/or the presence of particular secondary ions. Spatial resolution of better than 1 μm and mass resolution m/Δm of better than 400 were demonstrated. Background rejection from uncorrelated events of greater than an order of magnitude is also achieved. (c) 2000 American Institute of Physics

  12. Design of acoustic emission monitoring system based on VC++

    Science.gov (United States)

    Yu, Yang; He, Wei

    2015-12-01

    At present, many companies at home and abroad have researched and produced batches of specialized monitoring instruments for acoustic emission (AE). Most of them are expensive, and their functionality is often unstable and poorly portable with respect to the testing environment, transmission distance and other aspects. Against this background, a dual-channel intelligent acoustic emission monitoring system was designed based on the Microsoft Foundation Classes in Visual C++ to address some of the problems in acoustic emission research and meet the needs of actual monitoring tasks. It contains several modules, such as a main module, an acquisition module and a signal-parameter setting module. It outputs corrosion AE waveforms and signal-parameter results according to the parameters selected in the main menu, so that the needed information can be extracted from the experimental data for deeper analysis. This software is an important part of the AE detection system.

  13. Life cycle assessment of energy consumption and environmental emissions for cornstalk-based ethyl levulinate

    International Nuclear Information System (INIS)

    Wang, Zhiwei; Li, Zaifeng; Lei, Tingzhou; Yang, Miao; Qi, Tian; Lin, Lu; Xin, Xiaofei; Ajayebi, Atta; Yang, Yantao; He, Xiaofeng; Yan, Xiaoyu

    2016-01-01

    Highlights: • The first LCA of cornstalk-based ethyl levulinate. • Life cycle energy consumption and environmental emissions were evaluated. • Detailed foreground data from a demonstration project in China was used. • Criteria emissions in the combustion stage were based on engine tests. • Sensitivity analysis was performed based on different cornstalk prices. - Abstract: This study analysed the sustainability of fuel-ethyl levulinate (EL) production along with furfural, as a by-product, from cornstalk in China. A life cycle assessment (LCA) was conducted using the SimaPro software to evaluate the energy consumption (EC), greenhouse gas (GHG) and criteria emissions, from cornstalk growth to EL utilisation. The total life cycle EC was found to be 4.54 MJ/MJ EL, of which 94.7% was biomass energy. EC in the EL production stage was the highest, accounting for 96.8% of total EC. Fossil EC in this stage was estimated to be 0.095 MJ/MJ, which also represents the highest fossil EC throughout the life cycle (39.5% of the total). The ratio of biomass to fossil EC over the life cycle was 17.9, indicating good utilisation of renewable energy in cornstalk-based EL production. The net life cycle GHG emissions were 96.6 g CO_2-eq/MJ. The EL production stage demonstrated the highest GHG emissions, representing 53.4% of the total positive amount. Criteria emissions of carbon monoxide (CO) and particulates ⩽10 μm (PM10) showed negative values, of −3.15 and −0.72 g/MJ, respectively. Nitrogen oxides (NO_x) and sulphur dioxide (SO_2) emissions showed positive values of 0.33 and 0.28 g/MJ, respectively, mainly arising from the EL production stage. According to the sensitivity analysis, increasing or removing the cornstalk revenue in the LCA leads to an increase or decrease in the EC and environmental emissions while burning cornstalk directly in the field results in large increases in emissions of NMVOC, CO, NO_x and PM10 but decreases in fossil EC, and SO_2 and GHG
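
    As a quick back-of-envelope consistency check on the reported figures (our arithmetic, not part of the study): with 94.7% of the 4.54 MJ/MJ total being biomass energy,

    ```latex
    \[
    E_\mathrm{biomass} \approx 0.947 \times 4.54 \approx 4.30~\mathrm{MJ/MJ}, \qquad
    E_\mathrm{fossil} \approx 4.54 - 4.30 \approx 0.24~\mathrm{MJ/MJ}, \qquad
    \frac{E_\mathrm{biomass}}{E_\mathrm{fossil}} \approx 17.9
    \]
    ```

    which matches the quoted biomass-to-fossil ratio and is consistent with the production-stage fossil EC of 0.095 MJ/MJ being 39.5% of the fossil total (0.095/0.395 ≈ 0.24 MJ/MJ).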

  14. Orbitrap-based mass analyser for in-situ characterization of asteroids: ILMA, Ion Laser Mass Analyser

    Science.gov (United States)

    Briois, C.; Cotti, H.; Thirkell, L.; French Space Orbitrap Consortium [Aradj, K.; Bouabdellah, A.; Boukrara, A.; Carrasco, N.; Chalumeau, G.; Chapelon, O.; Colin, F.; Coll, P.; Engrand, C.; Grand, N.; Kukui, A.; Lebreton, J.-P.; Pennanech, C.; Szopa, C.; Thissen, R.; Vuitton, V.; Zapf, P.]; Makarov, A.

    2014-07-01

    For about a decade the boundaries between comets and carbonaceous asteroids have been fading [1,2]. The Rosetta mission will no doubt bring a new wealth of data on the composition of comets. But as promising as it may look, the mass resolving power of the mass spectrometers onboard (so far the best on a space mission) will only be able to partially account for the diversity of chemical structures present. ILMA (Ion-Laser Mass Analyser) is a new-generation high mass resolution LDI-MS (Laser Desorption-Ionization Mass Spectrometer) instrument concept using the Orbitrap technique, which has been developed in the frame of the two Marco Polo & Marco Polo-R proposals to the ESA Cosmic Vision program. Flagged by ESA as an instrument concept of interest for the mission in 2012, it has been under study for a few years in the frame of a Research and Technology (R&T) development programme between 5 French laboratories (LPC2E, IPAG, LATMOS, LISA, CSNSM) [3,4], partly funded by the French Space Agency (CNES). The work is undertaken in close collaboration with the Thermo Fisher Scientific Company, which commercialises Orbitrap-based laboratory instruments. The R&T activities are currently concentrating on the core elements of the Orbitrap analyser that are required to reach a sufficient maturity level for allowing design studies of future space instruments. A prototype is under development at LPC2E, and a mass resolution (m/Δm FWHM) of 100,000 has been obtained at m/z = 150 for a background pressure of 10^{-8} mbar. ILMA would be a key instrument to measure the molecular, elemental and isotopic composition of objects such as carbonaceous asteroids, comets, or other bodies devoid of atmosphere such as the surface of an icy satellite, the Moon, or Mercury.

  15. Sources of spontaneous emission based on indium arsenide

    International Nuclear Information System (INIS)

    Zotova, N. V.; Il'inskaya, N. D.; Karandashev, S. A.; Matveev, B. A.; Remennyi, M. A.; Stus', N. M.

    2008-01-01

    The results obtained for light-emitting diodes based on heterostructures that contain InAs in the active region and are grown by the methods of liquid-phase, molecular-beam, and vapor-phase epitaxy from organometallic compounds are reviewed. The emission intensity, the near-field patterns, and the light-current and current-voltage characteristics of light-emitting diodes that have flip-chip structure or feature a point contact are analyzed.

  16. Sources of spontaneous emission based on indium arsenide

    Energy Technology Data Exchange (ETDEWEB)

    Zotova, N V; Il'inskaya, N D; Karandashev, S A; Matveev, B. A., E-mail: bmat@iropt3.ioffe.rssi.ru; Remennyi, M A; Stus', N M [Russian Academy of Sciences, Ioffe Physicotechnical Institute (Russian Federation)

    2008-06-15

    The results obtained for light-emitting diodes based on heterostructures that contain InAs in the active region and are grown by the methods of liquid-phase, molecular-beam, and vapor-phase epitaxy from organometallic compounds are reviewed. The emission intensity, the near-field patterns, and the light-current and current-voltage characteristics of light-emitting diodes that have flip-chip structure or feature a point contact are analyzed.

  17. Output-based allocations and revenue recycling. Implications for the New Zealand Emissions Trading Scheme

    International Nuclear Information System (INIS)

    Lennox, James A.; Nieuwkoop, Renger van

    2010-01-01

    The New Zealand Emissions Trading Scheme (NZ ETS) is more comprehensive in its coverage of emissions than schemes introduced or proposed to date in any other country in that it includes agricultural greenhouse gases, which account for half of New Zealand's total emissions. But, motivated by concerns for the international competitiveness of emissions-intensive, trade-exposed industrial and agricultural activities, current legislation provides for substantial ongoing free allocations to such activities, linked to their output. Here we use a computable general equilibrium model to analyse the impacts of output-based allocation, given the possibility of recycling net revenues to reduce prior distorting taxes. Unlike previous modelling studies of alternative NZ ETS designs, we allow for a more realistic modelling both of capital and labour supply. We find that, as suggested by theoretical results, interactions between the ETS and existing taxes are important. Given any level of output-based allocation, the negative macroeconomic impacts can be reduced by recycling net revenues as efficiently as possible. Less obviously, we find that there may be an optimal non-zero level of output-based allocation. This optimal level increases as the carbon price and/or factor supply elasticities increase, but decreases if revenues are recycled with greater efficiency. (author)

  18. Output-based allocations and revenue recycling. Implications for the New Zealand Emissions Trading Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Lennox, James A. [Landcare Research NZ, Lincoln (New Zealand); Nieuwkoop, Renger van [Center for Energy Policy and Economy, Zuerich (Switzerland)

    2010-12-15

    The New Zealand Emissions Trading Scheme (NZ ETS) is more comprehensive in its coverage of emissions than schemes introduced or proposed to date in any other country in that it includes agricultural greenhouse gases, which account for half of New Zealand's total emissions. But, motivated by concerns for the international competitiveness of emissions-intensive, trade-exposed industrial and agricultural activities, current legislation provides for substantial ongoing free allocations to such activities, linked to their output. Here we use a computable general equilibrium model to analyse the impacts of output-based allocation, given the possibility of recycling net revenues to reduce prior distorting taxes. Unlike previous modelling studies of alternative NZ ETS designs, we allow for a more realistic modelling both of capital and labour supply. We find that, as suggested by theoretical results, interactions between the ETS and existing taxes are important. Given any level of output-based allocation, the negative macroeconomic impacts can be reduced by recycling net revenues as efficiently as possible. Less obviously, we find that there may be an optimal non-zero level of output-based allocation. This optimal level increases as the carbon price and/or factor supply elasticities increase, but decreases if revenues are recycled with greater efficiency. (author)

  19. Direct multielement trace analyses of silicon carbide powders by spark ablation simultaneous inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Kiera, Arne F.; Schmidt-Lehr, Sebastian; Song, Ming; Bings, Nicolas H.; Broekaert, Jose A.C.

    2008-01-01

    A procedure for the direct analysis of silicon carbide powders (SiC) by simultaneous detection inductively coupled plasma optical emission spectrometry using a Spectro-CIROS TM spectrometer (CCD-ICP-OES) and a novel spark ablation system Spectro-SASSy (SA) as sample introduction technique is described. The sample preparation procedure for SA of non-conducting material is based on mixing the sample powders with a conducting matrix, in this case copper and briquetting pellets. Pressing time, pressure and mixing ratio are shown to be important parameters of the pelleting technique with respect to their mechanical stability for the reliability of the analysis results. A mixing ratio of 0.2 g +0.6 g for SiC and Cu, a pressure of 10 t cm -2 and a pressing time of 8 min have been found optimum. It has also been shown that the spark parameters selected are crucial for uniform volatilization. Electron probe micrographs of the burning spots and the analytical signal magnitude showed that a rather hard spark at 100 Hz was optimum. The determination of trace elements in silicon carbide powders is demonstrated using a calibration based on the addition of standard solutions. For Al, Ti, V, Mn and Fe detection limits in the lower μg g -1 range can be achieved. Internal standardization with Y in combination with the addition of standard solutions allows relative standard deviations in the range of 4 to 24% for concentration levels of the order of 3 to 350 μg g -1
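
    The standard-addition calibration mentioned here works by spiking the sample with known amounts of analyte and extrapolating the signal-versus-added-concentration line back to zero signal; a minimal sketch of that extrapolation (with invented numbers, not data from the study) is:

    ```python
    import numpy as np

    def standard_addition(added, signal):
        """Estimate the native analyte concentration by standard addition.

        added  : known added concentrations (same units as the result)
        signal : measured emission intensities
        The native concentration is the magnitude of the x-intercept of the
        fitted line, i.e. intercept / slope.
        """
        slope, intercept = np.polyfit(added, signal, 1)
        return intercept / slope

    # Invented example: spikes of 0-30 ug/g of Fe and the resulting intensities.
    added = np.array([0.0, 10.0, 20.0, 30.0])
    signal = np.array([120.0, 245.0, 372.0, 498.0])
    print(standard_addition(added, signal))   # ~9.5 ug/g native Fe (illustrative)
    ```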

  20. Direct multielement trace analyses of silicon carbide powders by spark ablation simultaneous inductively coupled plasma optical emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Kiera, Arne F.; Schmidt-Lehr, Sebastian; Song, Ming [Institute for Inorganic and Applied Chemistry, University of Hamburg, Martin-Luther-King-Platz 6, D-20146 Hamburg (Germany); Bings, Nicolas H. [Institute for Inorganic and Applied Chemistry, University of Hamburg, Martin-Luther-King-Platz 6, D-20146 Hamburg (Germany)], E-mail: bings@chemie.uni-hamburg.de; Broekaert, Jose A.C. [Institute for Inorganic and Applied Chemistry, University of Hamburg, Martin-Luther-King-Platz 6, D-20146 Hamburg (Germany)

    2008-02-15

    A procedure for the direct analysis of silicon carbide powders (SiC) by simultaneous detection inductively coupled plasma optical emission spectrometry using a Spectro-CIROS{sup TM} spectrometer (CCD-ICP-OES) and a novel spark ablation system Spectro-SASSy (SA) as sample introduction technique is described. The sample preparation procedure for SA of non-conducting material is based on mixing the sample powders with a conducting matrix, in this case copper and briquetting pellets. Pressing time, pressure and mixing ratio are shown to be important parameters of the pelleting technique with respect to their mechanical stability for the reliability of the analysis results. A mixing ratio of 0.2 g +0.6 g for SiC and Cu, a pressure of 10 t cm{sup -2} and a pressing time of 8 min have been found optimum. It has also been shown that the spark parameters selected are crucial for uniform volatilization. Electron probe micrographs of the burning spots and the analytical signal magnitude showed that a rather hard spark at 100 Hz was optimum. The determination of trace elements in silicon carbide powders is demonstrated using a calibration based on the addition of standard solutions. For Al, Ti, V, Mn and Fe detection limits in the lower {mu}g g{sup -1} range can be achieved. Internal standardization with Y in combination with the addition of standard solutions allows relative standard deviations in the range of 4 to 24% for concentration levels of the order of 3 to 350 {mu}g g{sup -1}.

  1. Inventory of atmospheric pollutants emissions in France - sectorial series and extensive analysis; Inventaire des emissions de polluants atmospheriques en France - serie sectorielles et analyses etendues

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-02-01

    The present report supplies an update of emissions into the atmosphere within the French metropolitan area in the frame of the CORALIE programme, according to the 'SECTEN' format defined by CITEPA, which requires reporting emissions by the usual economic entities such as industry, residential/tertiary, agriculture, etc. Results deal systematically with the period 1990-2001, but also cover more extended ranges of time, especially from 1980 for some substances considered in the frame of the protocols under the Convention on Long-Range Transboundary Air Pollution, and from 1960 for some other substances traditionally inventoried by CITEPA. Data are presented for 28 different substances and for various indicators such as those related to acidification or the greenhouse effect. For most substances it is observed that emissions have been drastically reduced over the last ten or twenty years, and more particularly over the period 1990-2000. The results are presented at the national level for each of the main sectors defined in the SECTEN format, and a more detailed breakdown of each main sector is provided for 2000. Results are also given for different energy products, and several analyses present additional highlights for NMVOCs, HFCs, PFCs and particular sources such as transport. The report contains indications regarding the targets to which France is committed in the frame of international conventions and European Union directives; these indications demonstrate that the observed emission trends are generally encouraging. The table below summarizes total emissions over the period 1990-2001 for all substances mentioned above, as well as the indicators relating to acidification and the greenhouse effect. (author)

  2. Positron emission tomography study of pindolol occupancy of 5-HT1A receptors in humans: preliminary analyses

    International Nuclear Information System (INIS)

    Martinez, Diana; Mawlawi, Osama; Hwang, Dah-Ren; Kent, Justine; Simpson, Norman; Parsey, Ramin V.; Hashimoto, Tomoki; Slifstein, Mark; Huang Yiyun; Heertum, Ronald van; Abi-Dargham, Anissa; Caltabiano, Stephen; Malizia, Andrea; Cowley, Hugh; Mann, J. John; Laruelle, Marc

    2000-01-01

    Preclinical studies in rodents suggest that augmentation of serotonin reuptake inhibitors (SSRIs) therapy by the 5-hydroxytryptamine 1A (5-HT 1A ) receptor agent pindolol might reduce the delay between initiation of treatment and antidepressant response. This hypothesis is based on the ability of pindolol to potentiate the increase in serotonin (5-HT) transmission induced by SSRIs, an effect achieved by blockade of the 5-HT 1A autoreceptors in the dorsal raphe nuclei (DRN). However, placebo-controlled clinical studies of pindolol augmentation of antidepressant therapy have reported inconsistent results. Here, we evaluated the occupancy of 5-HT 1A receptors following treatment with controlled release pindolol in nine healthy volunteers with positron-emission tomography (PET). Each subject was studied four times: at baseline (scan 1), following 1 week of oral administration of pindolol CR (7.5 mg/day) at peak level, 4 h after the dose (scan 2), and at 10 h following the dose (scan 3), and following one dose of pindolol CR (30 mg) (at peak level, 4 h) (scan 4). Pindolol occupancy of 5-HT 1A receptors was evaluated in the DRN and cortical regions as the decrease in binding potential (BP) of the radiolabelled selective 5-HT 1A antagonist [carbonyl- 11 C]WAY-100635 or [carbonyl- 11 C] N-(2-(4-(2-methoxyphenyl)-1-piperazinyl)ethyl)-N-(2-pyridyl) cyclohexanecarboxamide abbreviated as [ 11 C]WAY-100635. Pindolol dose-dependently decreased [ 11 C]WAY-100635 BP. Combining all the regions, occupancy was 20 ± 8% at scan 2, 14 ± 8% at scan 3, and 44 ± 8% at scan 4. The results of this study suggest that at doses used in clinical studies of augmentation of the SSRI effect by pindolol (2.5 mg t.i.d.), the occupancy of 5-HT 1A receptors is moderate and highly variable between subjects. This factor might explain the variable results obtained in clinical studies. On the other hand, at each dose tested, pindolol occupancy of 5-HT 1A receptors was higher in the DRN compared to
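
    The receptor occupancy quoted here follows from the standard comparison of binding potential with and without the drug (the usual PET occupancy definition, stated for clarity rather than quoted from the paper):

    ```latex
    \[
    \mathrm{Occupancy} \;=\; \frac{BP_\mathrm{baseline} - BP_\mathrm{pindolol}}{BP_\mathrm{baseline}} \times 100\%
    \]
    ```

    So, for example, the 20% occupancy at scan 2 corresponds to a binding potential reduced to about 80% of its baseline value in that region.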

  3. Impact of urbanization on CO2 emissions: an empirical analysis for sub-Saharan African countries; Impact de l'urbanisation sur les émissions de CO2 : analyse empirique pour les pays d'Afrique subsaharienne

    Directory of Open Access Journals (Sweden)

    Nathan Roger Lea Jombi

    2014-01-01

    Full Text Available The relationship between urbanization and CO2 emissions has been the subject of much discussion over the past two decades. Most empirical studies addressed the issue under the environmental Kuznets curve (EKC) framework and find evidence of an inverted-U shape path that CO2 emissions follow as the level of urbanization rises. Yet, more recent studies suggest that the EKC framework may be inadequate, and that the EKC parameter estimates may be dependent on the sample used. The present study contributes to the literature by examining the impact of urbanization on CO2 emissions in sub-Saharan African countries. We use panel data over the period 1970-2010 and a Stochastic Impacts by Regressions on Population, Affluence and Technology (STIRPAT) model. We find that evidence of the EKC pathway is not robust.
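
    The STIRPAT model referred to above is conventionally estimated in logarithmic form (standard formulation, not specific to this study):

    ```latex
    \[
    \ln I_{it} \;=\; a \;+\; b \ln P_{it} \;+\; c \ln A_{it} \;+\; d \ln T_{it} \;+\; \varepsilon_{it}
    \]
    ```

    where I is environmental impact (here CO2 emissions), P population (with urbanization typically entering through P or as an additional regressor), A affluence (GDP per capita), T technology, and epsilon the error term.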

  4. High-global warming potential F-gas emissions in California: comparison of ambient-based versus inventory-based emission estimates, and implications of refined estimates.

    Science.gov (United States)

    Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy

    2014-01-21

    To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% under-estimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement to ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.

  5. Complementary Exploratory and Confirmatory Factor Analyses of the French WISC-V: Analyses Based on the Standardization Sample.

    Science.gov (United States)

    Lecerf, Thierry; Canivez, Gary L

    2017-12-28

    Interpretation of the French Wechsler Intelligence Scale for Children-Fifth Edition (French WISC-V; Wechsler, 2016a) is based on a 5-factor model including Verbal Comprehension (VC), Visual Spatial (VS), Fluid Reasoning (FR), Working Memory (WM), and Processing Speed (PS). Evidence for the French WISC-V factorial structure was established exclusively through confirmatory factor analyses (CFAs). However, as recommended by Carroll (1995); Reise (2012), and Brown (2015), factorial structure should derive from both exploratory factor analysis (EFA) and CFA. The first goal of this study was to examine the factorial structure of the French WISC-V using EFA. The 15 French WISC-V primary and secondary subtest scaled scores intercorrelation matrix was used and factor extraction criteria suggested from 1 to 4 factors. To disentangle the contribution of first- and second-order factors, the Schmid and Leiman (1957) orthogonalization transformation (SLT) was applied. Overall, no EFA evidence for 5 factors was found. Results indicated that the g factor accounted for about 67% of the common variance and that the contributions of the first-order factors were weak (3.6 to 11.9%). CFA was used to test numerous alternative models. Results indicated that bifactor models produced better fit to these data than higher-order models. Consistent with previous studies, findings suggested dominance of the general intelligence factor and that users should thus emphasize the Full Scale IQ (FSIQ) when interpreting the French WISC-V. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Development of proton-induced x-ray emission techniques with application to multielement analyses of human autopsy tissues and obsidian artifacts

    International Nuclear Information System (INIS)

    Nielson, K.K.

    1975-01-01

    A method of trace element analysis using proton-induced x-ray emission (PIXE) techniques with energy dispersive x-ray detection methods is described. Data were processed using the computer program ANALEX. PIXE analysis methods were applied to the analysis of liver, spleen, aorta, kidney medulla, kidney cortex, abdominal fat, pancreas, and hair from autopsies of Pima Indians. Tissues were freeze dried and low temperature ashed before analysis. Concentrations were tabulated for K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, Pb, Se, Br, Rb, Sr, Cd, and Cs and examined for significant differences related to diabetes. Concentrations of Ca and Sr in aorta, Fe and Rb in spleen and Mn in liver had different patterns in diabetics than in nondiabetics. High Cs concentrations were also observed in the kidneys of two subjects who died of renal disorders. Analyses by atomic absorption and PIXE methods were compared. PIXE methods were also applied to elemental analysis of obsidian artifacts from Campeche, Mexico. Based on K, Ba, Mn, Fe, Rb, Sr and Zr concentrations, the artifacts were related to several Guatemalan sources. (Diss. Abstr. Int., B)

  7. Modification of Optical Properties of Seawater Exposed to Oil Contaminants Based on Excitation-Emission Spectra

    Science.gov (United States)

    Baszanowska, E.; Otremba, Z.

    2015-10-01

    The optical behaviour of seawater exposed to a residual amount of oil pollution is presented, and a comparison of the fluorescence spectra of oil dissolved in n-hexane and in seawater is discussed based on excitation-emission spectra. Crude oil extracted from the southern part of the Baltic Sea was used to characterise petroleum properties after contact with seawater. The wavelength-independent fluorescence maxima for natural seawater and for seawater artificially polluted with oil were determined. Moreover, the specific excitation-emission peaks for natural seawater and polluted water were analysed to identify the natural organic matter composition. It was found that fluorescence spectra identification is a promising method to detect even an extremely low concentration of petroleum residues directly in the seawater. In addition, the disturbance of the fluorescence signatures of natural organic substances in the marine environment by alien substances is also discussed.

  8. Explosive emission cathode on the base of carbon plastic fibre

    International Nuclear Information System (INIS)

    Korenev, S.A.; Baranov, A.M.; Kostyuchenko, S.V.; Chernenko, N.M.

    1989-01-01

    A fabrication process for explosive emission cathodes based on carbon plastic fibre, of practically any geometrical shape and dimensions, has been developed. Experimental studies are reported of electron-beam current collection from cathodes 2 cm in diameter at diode voltages of 10 kV and of 150-250 kV. It is shown that the ignition voltage for the cathode plasma is ∼2 kV at an interelectrode diode gap of 5 mm and a residual gas pressure of ∼5×10⁻⁵ Torr. The carbon-fibre cathode fabricated in this way provides more stable current collection of an electron beam (without oscillations) than other cathodes.

  9. Simultaneous study of sputtering and secondary ion emission of binary Fe-based alloys

    International Nuclear Information System (INIS)

    Riadel, M.M.; Nenadovic, T.; Perovic, B.

    1976-01-01

    The sputtering and secondary ion emission of binary Fe-based alloys with simple phase diagrams have been studied simultaneously. A series of FeNi and FeCr alloys covering the concentration range 0-100% was bombarded by 4 keV Kr + ions in a secondary ion mass spectrometer. The composition of the secondary ions was analysed, and a fraction of the sputtered material was collected and analysed by electron microprobe. The surface topography of the etched samples was studied by scanning electron microscope. The relative sputtering coefficients of the metals were determined, and preferential sputtering of the alloying component with the lower sputtering yield S was demonstrated. The etching pictures of the samples correlate with the sputtering rates. The degree of secondary ionization α + was also calculated from the simultaneously measured ion emission and sputtering data; it shows a change in the concentration range of the melting-point minimum, a fact that emphasizes the connection between the physico-chemical properties of the alloys and their secondary emission process. From the dependence of the emitted homo- and hetero-cluster ions, conclusions could be drawn concerning the production mechanism of small metallic aggregates.

  10. Changes in agricultural carbon emissions and factors that influence agricultural carbon emissions based on different stages in Xinjiang, China.

    Science.gov (United States)

    Xiong, Chuanhe; Yang, Degang; Xia, Fuqiang; Huo, Jinwei

    2016-11-10

    Xinjiang's agricultural carbon emissions showed three stages of change, i.e., continued to rise, declined and continued to rise, during 1991-2014. The agriculture belonged to the "low emissions and high efficiency" agriculture category, with a lower agricultural carbon emission intensity. By using the logarithmic mean divisia index decomposition method, agricultural carbon emissions were decomposed into an efficiency factor, a structure factor, an economy factor, and a labour factor. We divided the study period into five stages based on the changes in efficiency factor and economy factor. Xinjiang showed different agricultural carbon emission characteristics at different stages. The degree of impact on agricultural carbon emissions at these stages depended on the combined effect of planting-animal husbandry carbon intensity and agricultural labour productivity. The economy factor was the critical factor to promote the increase in agricultural carbon emissions, while the main inhibiting factor for agricultural carbon emissions was the efficiency factor. The labour factor became more and more obvious in increasing agricultural carbon emissions. Finally, we discuss policy recommendations in terms of the main factors, including the development of agricultural science and technology (S&T), the establishment of three major mechanisms and transfer of rural labour in ethnic areas.
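
    For reference, the LMDI method referred to above attributes the change in emissions between years 0 and T to each factor x using logarithmic-mean weights (generic LMDI-I form, not the authors' exact notation):

    ```latex
    \[
    \Delta C_{x} \;=\; \sum_{i} L\!\left(C_{i}^{T}, C_{i}^{0}\right)\,
    \ln\frac{x_{i}^{T}}{x_{i}^{0}},
    \qquad
    L(a,b) \;=\; \frac{a-b}{\ln a - \ln b}
    \]
    ```

    so that the efficiency, structure, economy and labour contributions sum exactly to the total change in emissions between the two years.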

  11. Changes in agricultural carbon emissions and factors that influence agricultural carbon emissions based on different stages in Xinjiang, China

    Science.gov (United States)

    Xiong, Chuanhe; Yang, Degang; Xia, Fuqiang; Huo, Jinwei

    2016-01-01

    Xinjiang’s agricultural carbon emissions showed three stages of change, i.e., continued to rise, declined and continued to rise, during 1991–2014. The agriculture belonged to the “low emissions and high efficiency” agriculture category, with a lower agricultural carbon emission intensity. By using the logarithmic mean divisia index decomposition method, agricultural carbon emissions were decomposed into an efficiency factor, a structure factor, an economy factor, and a labour factor. We divided the study period into five stages based on the changes in efficiency factor and economy factor. Xinjiang showed different agricultural carbon emission characteristics at different stages. The degree of impact on agricultural carbon emissions at these stages depended on the combined effect of planting-animal husbandry carbon intensity and agricultural labour productivity. The economy factor was the critical factor to promote the increase in agricultural carbon emissions, while the main inhibiting factor for agricultural carbon emissions was the efficiency factor. The labour factor became more and more obvious in increasing agricultural carbon emissions. Finally, we discuss policy recommendations in terms of the main factors, including the development of agricultural science and technology (S&T), the establishment of three major mechanisms and transfer of rural labour in ethnic areas. PMID:27830739

  12. Knowledge-based automated radiopharmaceutical manufacturing for Positron Emission Tomography

    International Nuclear Information System (INIS)

    Alexoff, D.L.

    1991-01-01

    This article describes the application of basic knowledge engineering principles to the design of automated synthesis equipment for radiopharmaceuticals used in Positron Emission Tomography (PET). Before discussing knowledge programming, an overview of the development of automated radiopharmaceutical synthesis systems for PET will be presented. Since knowledge systems will rely on information obtained from machine transducers, a discussion of the uses of sensory feedback in today's automated systems follows. Next, the operation of these automated systems is contrasted to radiotracer production carried out by chemists, and the rationale for and basic concepts of knowledge-based programming are explained. Finally, a prototype knowledge-based system supporting automated radiopharmaceutical manufacturing of 18FDG at Brookhaven National Laboratory (BNL) is described using 1stClass, a commercially available PC-based expert system shell

  13. Advanced exergy-based analyses applied to a system including LNG regasification and electricity generation

    Energy Technology Data Exchange (ETDEWEB)

    Morosuk, Tatiana; Tsatsaronis, George; Boyano, Alicia; Gantiva, Camilo [Technische Univ. Berlin (Germany)

    2012-07-01

    Liquefied natural gas (LNG) will contribute more in the future than in the past to the overall energy supply in the world. The paper discusses the application of advanced exergy-based analyses to a recently developed LNG-based cogeneration system. These analyses include advanced exergetic, advanced exergoeconomic, and advanced exergoenvironmental analyses in which thermodynamic inefficiencies (exergy destruction), costs, and environmental impacts have been split into avoidable and unavoidable parts. With the aid of these analyses, the potentials for improving the thermodynamic efficiency and for reducing the overall cost and the overall environmental impact are revealed. The objectives of this paper are to demonstrate (a) the potential for generating electricity while regasifying LNG and (b) some of the capabilities associated with advanced exergy-based methods. The most important subsystems and components are identified, and suggestions for improving them are made. (orig.)
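
    The splitting referred to here decomposes the exergy destruction within each component k into parts the designer can and cannot influence (standard advanced-exergy notation, given for orientation):

    ```latex
    \[
    \dot{E}_{D,k} \;=\; \dot{E}_{D,k}^{\mathrm{AV}} + \dot{E}_{D,k}^{\mathrm{UN}}
    \]
    ```

    with analogous avoidable/unavoidable splits applied to the component-related costs and environmental impacts in the exergoeconomic and exergoenvironmental analyses.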

  14. Positron emission tomography study of pindolol occupancy of 5-HT{sub 1A} receptors in humans: preliminary analyses

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Diana; Mawlawi, Osama; Hwang, Dah-Ren; Kent, Justine; Simpson, Norman; Parsey, Ramin V.; Hashimoto, Tomoki; Slifstein, Mark; Huang Yiyun; Heertum, Ronald van; Abi-Dargham, Anissa; Caltabiano, Stephen; Malizia, Andrea; Cowley, Hugh; Mann, J. John; Laruelle, Marc

    2000-07-01

    Preclinical studies in rodents suggest that augmentation of serotonin reuptake inhibitors (SSRIs) therapy by the 5-hydroxytryptamine{sub 1A} (5-HT{sub 1A}) receptor agent pindolol might reduce the delay between initiation of treatment and antidepressant response. This hypothesis is based on the ability of pindolol to potentiate the increase in serotonin (5-HT) transmission induced by SSRIs, an effect achieved by blockade of the 5-HT{sub 1A} autoreceptors in the dorsal raphe nuclei (DRN). However, placebo-controlled clinical studies of pindolol augmentation of antidepressant therapy have reported inconsistent results. Here, we evaluated the occupancy of 5-HT{sub 1A} receptors following treatment with controlled release pindolol in nine healthy volunteers with positron-emission tomography (PET). Each subject was studied four times: at baseline (scan 1), following 1 week of oral administration of pindolol CR (7.5 mg/day) at peak level, 4 h after the dose (scan 2), and at 10 h following the dose (scan 3), and following one dose of pindolol CR (30 mg) (at peak level, 4 h) (scan 4). Pindolol occupancy of 5-HT{sub 1A} receptors was evaluated in the DRN and cortical regions as the decrease in binding potential (BP) of the radiolabelled selective 5-HT{sub 1A} antagonist [carbonyl-{sup 11}C]WAY-100635 or [carbonyl-{sup 11}C] N-(2-(4-(2-methoxyphenyl)-1-piperazinyl)ethyl)-N-(2-pyridyl) cyclohexanecarboxamide abbreviated as [{sup 11}C]WAY-100635. Pindolol dose-dependently decreased [{sup 11}C]WAY-100635 BP. Combining all the regions, occupancy was 20 {+-} 8% at scan 2, 14 {+-} 8% at scan 3, and 44 {+-} 8% at scan 4. The results of this study suggest that at doses used in clinical studies of augmentation of the SSRI effect by pindolol (2.5 mg t.i.d.), the occupancy of 5-HT{sub 1A} receptors is moderate and highly variable between subjects. This factor might explain the variable results obtained in clinical studies. On the other hand, at each dose tested, pindolol occupancy of 5

  15. Current analyses of particle emission in Ne-nucleus collisions between 400 and 800 MeV per nucleon

    International Nuclear Information System (INIS)

    Poitou, J.; Babinet, R.; Cavata, C.; Alard, J.P.; Augerat, J.; Bastid, N.; Dupieux, P.; Fraysse, L.; Montarou, G.; Parizet, M.J.; Brochard, F.; Gorodetzky, P.; Racca, C.

    1989-01-01

    We present preliminary data from neon-induced reactions at 400 to 800 MeV per nucleon on various targets ranging from NaF to Pb, measured with the 4π detector Diogene at Saturne. Multiplicity-selected events are studied for the behaviour of their rapidity and transverse energy distributions. The proton data cannot be understood in terms of emission by a single thermalized source but rather suggest the contribution of a second source which is colder than the fireball. The pions, which cannot be emitted by a cold source, behave qualitatively as expected from a thermal emission process, and might thus be a much better thermometer than the protons for the fireball. The production of delta resonances in non-peripheral collisions is clearly established.

  16. Carbon emissions trading scheme exploration in China: A multi-agent-based model

    International Nuclear Information System (INIS)

    Tang, Ling; Wu, Jiaqian; Yu, Lean; Bao, Qin

    2015-01-01

    To develop a low-carbon economy, China launched seven pilot programs for carbon emissions trading (CET) in 2011 and plans to establish a nationwide CET mechanism in 2015. This paper formulated a multi-agent-based model to investigate the impacts of different CET designs in order to find the most appropriate one for China. The proposed bottom-up model includes all main economic agents in a general equilibrium framework. The simulation results indicate that (1) CET would effectively reduce carbon emissions, with a certain negative impact on the economy, (2) as for allowance allocation, the grandfathering rule is relatively moderate, while the benchmarking rule is more aggressive, (3) as for the carbon price, when the price level in the secondary CET market is regulated to be around RMB 40 per metric ton, a satisfactory emission mitigation effect can be obtained, (4) the penalty rate is suggested to be carefully designed to balance the economy development and mitigation effect, and (5) subsidy policy for energy technology improvement can effectively reduce carbon emissions without an additional negative impact on the economy. The results also indicate that the proposed novel model is a promising tool for CET policy making and analyses. -- Highlights: •A multi-agent-based model is proposed for carbon emissions trading (CET) in China. •Three agents are included: government, firms in different sectors and households. •The impacts of CET on the economy and environment in China are analyzed. •Different CET designs are simulated to find an appropriate policy for China. •Results confirm the effectiveness of the model and give helpful insights into CET design
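
    To make the bottom-up idea concrete, here is a deliberately tiny, self-contained toy (our illustration only, not the authors' model): firms with different marginal abatement costs decide, at a given allowance price, whether to abate or to buy allowances, and the total emissions and net allowance demand can then be observed.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Firm:
        baseline: float      # baseline emissions (Mt CO2)
        allocation: float    # free allowances (Mt CO2)
        mac: float           # marginal abatement cost (RMB/t), assumed constant

        def respond(self, price):
            """Abate fully if abatement is cheaper than the allowance price."""
            emissions = 0.0 if self.mac < price else self.baseline
            net_purchase = emissions - self.allocation   # >0 buys, <0 sells
            return emissions, net_purchase

    # Hypothetical firms and a regulated price of RMB 40/t, as in the abstract.
    firms = [Firm(10, 8, 25), Firm(6, 5, 60), Firm(4, 3, 35)]
    results = [f.respond(price=40) for f in firms]
    print("total emissions:", sum(e for e, _ in results))
    print("net allowance demand:", sum(p for _, p in results))
    ```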

  17. MODELING ATMOSPHERIC EMISSION FOR CMB GROUND-BASED OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Errard, J.; Borrill, J. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Ade, P. A. R. [School of Physics and Astronomy, Cardiff University, Cardiff CF10 3XQ (United Kingdom); Akiba, Y.; Chinone, Y. [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801 (Japan); Arnold, K.; Atlas, M.; Barron, D.; Elleflot, T. [Department of Physics, University of California, San Diego, CA 92093-0424 (United States); Baccigalupi, C.; Fabbian, G. [International School for Advanced Studies (SISSA), Trieste I-34014 (Italy); Boettger, D. [Department of Astronomy, Pontifica Universidad Catolica de Chile (Chile); Chapman, S. [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, NS, B3H 4R2 (Canada); Cukierman, A. [Department of Physics, University of California, Berkeley, CA 94720 (United States); Delabrouille, J. [AstroParticule et Cosmologie, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité (France); Dobbs, M.; Gilbert, A. [Physics Department, McGill University, Montreal, QC H3A 0G4 (Canada); Ducout, A.; Feeney, S. [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom); Feng, C. [Department of Physics and Astronomy, University of California, Irvine (United States); and others

    2015-08-10

    Atmosphere is one of the most important noise sources for ground-based cosmic microwave background (CMB) experiments. By increasing optical loading on the detectors, it amplifies their effective noise, while its fluctuations introduce spatial and temporal correlations between detected signals. We present a physically motivated 3D model of the total intensity emission of the atmosphere at millimeter and sub-millimeter wavelengths. We derive a new analytical estimate for the correlation between detectors' time-ordered data as a function of the instrument and survey design, as well as several atmospheric parameters such as wind, relative humidity, temperature and turbulence characteristics. Using an original numerical computation, we examine the effect of each physical parameter on the correlations in the time series of a given experiment. We then use a parametric-likelihood approach to validate the modeling and estimate atmosphere parameters from the POLARBEAR-I project's first-season data set. We derive a new 1.0% upper limit on the linear polarization fraction of atmospheric emission. We also compare our results to previous studies and weather station measurements. The proposed model can be used for realistic simulations of future ground-based CMB observations.

  18. Direct nitrous oxide emissions in Mediterranean climate cropping systems : Emission factors based on a meta-analysis of available measurement data

    NARCIS (Netherlands)

    Cayuela, Maria L.; Aguilera, Eduardo; Sanz-Cobena, Alberto; Adams, Dean C.; Abalos, Diego; Barton, Louise; Ryals, Rebecca; Silver, Whendee L.; Alfaro, Marta A.; Pappa, Valentini A.; Smith, Pete; Garnier, Josette; Billen, Gilles; Bouwman, Lex; Bondeau, Alberte; Lassaletta, Luis

    2017-01-01

    Many recent reviews and meta-analyses of N2O emissions do not include data from Mediterranean studies. In this paper we present a meta-analysis of the N2O emissions from Mediterranean cropping systems, and propose a more robust and reliable regional emission factor (EF) for N2O, distinguishing the

  19. Optical spectroscopy, 1.06μm emission properties of Nd3+-doped phosphate based glasses.

    Science.gov (United States)

    Rasool, Sk. Nayab; Sasikala, T.; Mohan Babu, A.; Rama Moorthy, L.; Jayasankar, C.K.

    2017-06-05

    Neodymium doped phosphate based glasses with composition of (P 2 O 5 +K 2 O+Al 2 O 3 +CaF 2 ) were prepared. The samples were analysed through differential thermal analysis (DTA), Fourier transform infrared (FTIR), absorption, emission and decay measurements. Judd-Ofelt parameters (Ω λ ) have been determined from the spectral intensities of absorption bands in order to calculate the radiative parameters like radiative transition probabilities (A R ), radiative lifetime (τ R ) and branching ratios (β R ) for the 4 F 3/2 → 4 I 11/2 laser transition of Nd 3+ ion. The effective emission bandwidths (Δλ eff ), experimental branching ratios (β exp ) and stimulated emission cross-sections (σ e ) have been determined from the emission spectrum. The decay curves of the 4 F 3/2 level exhibited almost single exponential nature for all the Nd 3+ ion concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.
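
    The stimulated emission cross-section referred to above is commonly obtained from the Judd-Ofelt radiative transition probability and the measured emission band through a Füchtbauer-Ladenburg-type relation (standard form; symbols as defined in the abstract, not copied from the paper):

    ```latex
    \[
    \sigma_{e}\!\left({}^{4}F_{3/2}\rightarrow{}^{4}I_{11/2}\right)
    \;=\; \frac{\lambda_{p}^{4}\, A_{R}}{8\pi\, c\, n^{2}\, \Delta\lambda_{\mathrm{eff}}}
    \]
    ```

    where λ_p is the emission peak wavelength (about 1.06 μm here), n the refractive index of the glass, and Δλ_eff the effective emission bandwidth.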

  20. Analysis of nickel-base alloys by Grimm-type glow discharge emission and x-ray fluorescence spectrometry

    International Nuclear Information System (INIS)

    Ferreira, N.P.; Strauss, J.A.; Van Maarseveen, I.; Ivanfy, A.B.

    1985-01-01

    Nickel-base alloys can be analysed as satisfactorily as steels by XRF as well as by the Grimm-type source, in spite of problems caused by element combinations, spectral line overlap and the influence of the structure and heat conduction properties on sputtering in the glow discharge source. This extended abstract briefly discusses the use of Grimm-type glow discharge emission and XRF as techniques for the analysis of nickel-base alloys

  1. GIS-based Approaches to Catchment Area Analyses of Mass Transit

    DEFF Research Database (Denmark)

    Andersen, Jonas Lohmann Elkjær; Landex, Alex

    2009-01-01

    Catchment area analyses of stops or stations are used to investigate the potential number of travellers to public transportation. These analyses are considered a strong decision tool in the planning process of mass transit, especially railroads. Catchment area analyses are GIS-based buffer and overlay analyses with different approaches depending on the desired level of detail. A simple but straightforward approach to implement is the Circular Buffer Approach, where catchment areas are circular. A more detailed approach is the Service Area Approach, where catchment areas are determined by a street network search to simulate the actual walking distances. A refinement of the Service Area Approach is to implement additional time resistance in the network search to simulate obstacles in the walking environment. This paper reviews and compares the different GIS-based catchment area approaches, their level
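
    A minimal sketch of the Circular Buffer Approach (using shapely; the station coordinates and the 600 m walking radius are invented for illustration, and a real Service Area Approach would instead traverse the street network):

    ```python
    from shapely.geometry import Point

    # Hypothetical station locations in a projected coordinate system (metres).
    stations = [Point(725_000, 6_175_000), Point(726_200, 6_175_400)]
    radius_m = 600   # assumed maximum walking distance

    # Circular Buffer Approach: the catchment is the union of circles.
    catchment = stations[0].buffer(radius_m)
    for s in stations[1:]:
        catchment = catchment.union(s.buffer(radius_m))

    print(round(catchment.area / 1e6, 2), "km^2 of catchment area")
    ```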

  2. Meta-Analyses of Human Cell-Based Cardiac Regeneration Therapies

    DEFF Research Database (Denmark)

    Gyöngyösi, Mariann; Wojakowski, Wojciech; Navarese, Eliano P

    2016-01-01

    In contrast to multiple publication-based meta-analyses involving clinical cardiac regeneration therapy in patients with recent myocardial infarction, a recently published meta-analysis based on individual patient data reported no effect of cell therapy on left ventricular function or clinical...

  3. Employing a CGE model in analysing the environmental and economy-wide impacts of CO2 emission abatement policies in Malaysia.

    Science.gov (United States)

    Yahoo, Masoud; Othman, Jamal

    2017-04-15

    The impact of global warming has received much international attention in recent decades. To meet climate-change mitigation targets, environmental policy instruments have been designed to transform the way goods and services are produced as well as alter consumption patterns. The government of Malaysia is strongly committed to reducing CO 2 gas emissions as a proportion of GDP by 40% from 2005 levels by the year 2020. This study evaluates the economy-wide impacts of implementing two different types of CO 2 emission abatement policies in Malaysia using market-based (imposing a carbon tax) and command-and-control mechanism (sectoral emission standards). The policy simulations conducted involve the removal of the subsidy on petroleum products by the government. A carbon emission tax in conjunction with the revenue neutrality assumption is seen to be more effective than a command-and-control policy as it provides a double dividend. This is apparent as changes in consumption patterns lead to welfare enhancements while contributing to reductions in CO 2 emissions. The simulation results show that the production of renewable energies is stepped up when the imposition of carbon tax and removal of the subsidy is augmented by revenue recycling. This study provides an economy-wide assessment that compares two important tools for assisting environment policy makers evaluate carbon emission abatement initiatives in Malaysia. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Impacts of EU carbon emission trade directive on energy-intensive industries. Indicative micro-economic analyses

    International Nuclear Information System (INIS)

    Lund, Peter

    2007-01-01

    The cost impacts of the European emission trading system (ETS) on energy-intensive manufacturing industries have been investigated. The effects consist of direct costs associated with the CO2 reduction requirements stated in the EU Directive, and of indirect costs of comparable magnitude that originate from a higher electricity price triggered by the ETS in the power sector. The total cost impacts remain below 2% of the production value for most industries within the ETS in the Kyoto period. In the post-Kyoto phase, assuming a 30% CO2 reduction, the total cost impact may rise to as much as 8% of production value in the heaviest industry sectors. In the steel and cement industries the cost impacts are three to four times those of the least affected sectors, pulp and paper and oil refining. Electricity-intensive industries outside the ETS will also be affected; for example, in aluminum and chlorine production the indirect cost impacts of the ETS could reach 10% of production value already in the Kyoto period. As industry sectors are affected differently by the ETS, some correcting mechanisms may be worth considering to secure the operation of the most electricity-intensive sectors, e.g. balancing taxation schemes financed in part by a levy on the windfall profits accruing to the power sector due to the ETS. A future improvement of the ETS for industries within the scheme could be to scale the emission reduction requirement so that the relative total emission reduction costs are at about the same level. (author)

  5. Particle filtering based structural assessment with acoustic emission sensing

    Science.gov (United States)

    Yan, Wuzhao; Abdelrahman, Marwa; Zhang, Bin; Ziehl, Paul

    2017-02-01

    Nuclear structures are designed to withstand severe loading events under various stresses. Over time, aging of structural systems constructed with concrete and steel will occur. This deterioration may reduce the service life of nuclear facilities and/or lead to unnecessary or untimely repairs. Therefore, online monitoring of structures in nuclear power plants and waste storage has drawn significant attention in recent years. Of the many existing non-destructive evaluation and structural monitoring approaches, acoustic emission is promising for assessment of structural damage because it is non-intrusive and is sensitive to corrosion and crack growth in reinforced concrete elements. To provide a rapid, actionable, and graphical means of interpretation, Intensity Analysis plots have been developed; this approach provides a means for classification of damage. Since the acoustic emission measurement is only an indirect indicator of structural damage, potentially corrupted by non-genuine data, it is more suitable to estimate the states of corrosion and cracking in a Bayesian estimation framework. In this paper, we utilize accelerated corrosion data from a specimen at the University of South Carolina to develop a particle filtering-based diagnosis and prognosis algorithm. Promising features of the proposed algorithm are described in terms of corrosion state estimation and prediction of degradation over time to a predefined threshold.
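
    A minimal sketch of the Bayesian estimation idea behind such a diagnosis and prognosis scheme is given below; it is a generic bootstrap particle filter with an assumed degradation model, noise levels and measurement relation, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 2000                             # number of particles
      growth, q_std, r_std = 0.05, 0.02, 0.20   # assumed model parameters
      particles = rng.normal(0.0, 0.05, N)      # initial corrosion-state guess
      weights = np.full(N, 1.0 / N)

      def step(particles, weights, z):
          # Predict: propagate each particle through the degradation model.
          particles = particles + growth + rng.normal(0.0, q_std, particles.size)
          # Update: weight particles by the likelihood of the AE-derived
          # measurement z (here measurement = state + Gaussian noise).
          lik = np.exp(-0.5 * ((z - particles) / r_std) ** 2)
          weights = weights * lik
          weights /= weights.sum()
          # Systematic resampling when the effective sample size is low.
          if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
              pos = (rng.random() + np.arange(particles.size)) / particles.size
              idx = np.searchsorted(np.cumsum(weights), pos)
              particles = particles[idx]
              weights = np.full(particles.size, 1.0 / particles.size)
          return particles, weights

      # Synthetic AE-derived corrosion indicator over 40 inspection intervals.
      truth = 0.05 * np.arange(40)
      for z in truth + rng.normal(0.0, r_std, truth.size):
          particles, weights = step(particles, weights, z)

      estimate = np.sum(particles * weights)
      print(f"estimated corrosion state: {estimate:.2f} (true {truth[-1]:.2f})")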

  6. Silicon-based metallic micro grid for electron field emission

    International Nuclear Information System (INIS)

    Kim, Jaehong; Jeon, Seok-Gy; Kim, Jung-Il; Kim, Geun-Ju; Heo, Duchang; Shin, Dong Hoon; Sun, Yuning; Lee, Cheol Jin

    2012-01-01

    A micro-scale metal grid based on a silicon frame for application to electron field emission devices is introduced and experimentally demonstrated. A silicon lattice containing aperture holes with an area of 80 × 80 µm² and a thickness of 10 µm is precisely manufactured by dry etching the silicon on one side of a double-polished silicon wafer and by wet etching the opposite side. Because a silicon lattice is more rigid than a pure metal lattice, a thin layer of Au/Ti deposited on the silicon lattice for voltage application can be more resistant to the geometric stress caused by the applied electric field. The micro-fabrication process, the images of the fabricated grid with 88% geometric transparency and the surface profile measurement after thermal feasibility testing up to 700 °C are presented. (paper)

  7. Prediction on carbon dioxide emissions based on fuzzy rules

    Science.gov (United States)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not able to provide good forecasting performance due to problems with the non-linearity, uncertainty and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables (energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity) are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
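
    The following toy sketch illustrates how a fuzzy rule base maps input indicators to a CO2 estimate; the two inputs, membership functions, rules and consequents are invented for illustration and do not reproduce the study's FIS or ANFIS models.

      import numpy as np

      def trimf(x, a, b, c):
          """Triangular membership function."""
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def predict_co2(energy_use, gdp_per_capita):
          # Fuzzify inputs (normalised 0..1 scales, illustrative break points).
          e_low, e_high = trimf(energy_use, -0.5, 0.0, 0.6), trimf(energy_use, 0.4, 1.0, 1.5)
          g_low, g_high = trimf(gdp_per_capita, -0.5, 0.0, 0.6), trimf(gdp_per_capita, 0.4, 1.0, 1.5)
          # Rule base: firing strength (min of antecedents) -> crisp consequent
          # in illustrative units of CO2 emissions.
          rules = [
              (min(e_low,  g_low),  2.0),   # low energy use, low GDP  -> low CO2
              (min(e_low,  g_high), 4.0),
              (min(e_high, g_low),  6.0),
              (min(e_high, g_high), 9.0),   # high energy use, high GDP -> high CO2
          ]
          w = np.array([r[0] for r in rules])
          y = np.array([r[1] for r in rules])
          # Zero-order Sugeno-style weighted-average defuzzification.
          return float(np.dot(w, y) / w.sum()) if w.sum() > 0 else np.nan

      print(predict_co2(0.8, 0.7))   # mostly the "high/high" rule fires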

  8. Scenario-based analyses of energy system development and its environmental implications in Thailand

    International Nuclear Information System (INIS)

    Shrestha, Ram M.; Malla, Sunil; Liyanage, Migara H.

    2007-01-01

    Thailand is one of the fastest growing energy-intensive economies in Southeast Asia. To formulate sound energy policies in the country, it is important to understand the impact of energy use on the environment over the long-period. This study examines energy system development and its associated greenhouse gas and local air pollutant emissions under four scenarios in Thailand through the year 2050. The four scenarios involve different growth paths for economy, population, energy efficiency and penetration of renewable energy technologies. The paper assesses the changes in primary energy supply mix, sector-wise final energy demand, energy import dependency and CO 2 , SO 2 and NO x emissions under four scenarios using end-use based Asia-Pacific Integrated Assessment Model (AIM/Enduse) of Thailand. (author)

  9. A New Acoustic Emission Sensor Based Gear Fault Detection Approach

    Directory of Open Access Journals (Sweden)

    Junda Zhu

    2013-01-01

    Full Text Available In order to reduce wind energy costs, prognostics and health management (PHM) of wind turbines is needed to ensure the reliability and availability of wind turbines. A gearbox is an important component of a wind turbine. Therefore, developing effective gearbox fault detection tools is important to the PHM of wind turbines. In this paper, a new acoustic emission (AE) sensor based gear fault detection approach is presented. This approach combines a heterodyne based frequency reduction technique with time synchronous averaging (TSA) and spectral kurtosis (SK) to process AE sensor signals and extract features as condition indicators for gear fault detection. The heterodyne technique, commonly used in communications, is first employed to preprocess the AE signals before sampling. By heterodyning, the AE signal frequency is downshifted from several hundred kHz to below 50 kHz, so that the required AE sampling rate is comparable to that of vibration signals. The presented approach is validated using seeded gear tooth crack fault tests on a notional split torque gearbox. The approach presented in this paper is physics based, and the validation results show that it can effectively detect the gear faults.
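
    The signal chain described above can be sketched as follows; the sampling rate, carrier frequency, filter settings and the synthetic signal are assumptions for illustration, not the paper's parameters or data.

      import numpy as np
      from scipy import signal, stats

      fs = 1_000_000                       # AE sampling rate (Hz), illustrative
      t = np.arange(0, 0.5, 1 / fs)
      carrier = 200_000                    # AE burst band centre (Hz), illustrative
      rev_rate = 20                        # shaft speed (rev/s), illustrative
      # Synthetic AE signal: bursts once per revolution riding on a carrier.
      bursts = (np.sin(2 * np.pi * rev_rate * t) > 0.995).astype(float)
      x = bursts * np.sin(2 * np.pi * carrier * t) + 0.1 * np.random.randn(t.size)

      # 1) Heterodyne: mix with a local oscillator, low-pass, then decimate so
      #    the effective sampling rate is comparable to vibration signals.
      lo = np.cos(2 * np.pi * (carrier - 10_000) * t)      # keep a 10 kHz image
      b, a = signal.butter(4, 20_000 / (fs / 2), btype="low")
      baseband = signal.filtfilt(b, a, x * lo)
      dec = 20
      y, fs_y = baseband[::dec], fs // dec

      # 2) Time synchronous averaging over whole revolutions.
      spr = fs_y // rev_rate                               # samples per revolution
      n_rev = y.size // spr
      tsa = y[: n_rev * spr].reshape(n_rev, spr).mean(axis=0)

      # 3) Spectral kurtosis: kurtosis of the short-time spectral magnitude in
      #    each frequency bin (impulsive gear faults show up as high SK).
      f, _, Z = signal.stft(y, fs=fs_y, nperseg=256)
      sk = stats.kurtosis(np.abs(Z), axis=1, fisher=True)
      print("peak SK at %.0f Hz" % f[np.argmax(sk)])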

  10. Goldtraces on wedge-shaped artefacts from late neolithic of south Scandinavia analysed by proton induced x-ray emission spectroscopy

    International Nuclear Information System (INIS)

    Ahlberg, M.; Akselsson, R.; Forkman, B.; Rausing, G.

    1975-01-01

    Visible coloured traces on the surface of two selected wedge-shaped artefacts (pendants) of slate from the late Neolithic of South Scandinavia was analysed by means of proton-induced x-ray emission spectroscopy (PIXE). PIXE is shown to be a feasible tool in investigating surface layers of archeological significance. Three different gold-silver alloys was found on the two pendants. The results indicate that we shall have to reconsider the general accepted theories on the economic basis of the early Bronze Age in the area. (author)

  11. VOCs emission characteristics and priority control analysis based on VOCs emission inventories and ozone formation potentials in Zhoushan

    Science.gov (United States)

    Wang, Qiaoli; Li, Sujing; Dong, Minli; Li, Wei; Gao, Xiang; Ye, Rongmin; Zhang, Dongxiao

    2018-06-01

    Zhoushan is an island city with booming tourism and service industries, but it also hosts many developed VOC- and/or NOx-emitting industries. It is necessary to carry out regional VOC and O3 pollution control in Zhoushan, the only new area granted provincial-level economic and social administration rights. Anthropogenic VOC emission inventories were built using the emission factor method, and the main emission sources were identified from the inventories. Localized VOC source profiles were then built based on on-site sampling and on other studies. Furthermore, ozone formation potential (OFP) profiles were derived from the VOC source profiles using maximum incremental reactivity (MIR) theory. Finally, the priority control analysis showed that industrial processes, especially surface coating, are the key to VOC and O3 control. Alkanes were the most emitted group, accounting for 58.67%, while aromatics contributed the most to ozone production, accounting for 69.97% of total OFP. n-Butane, m/p-xylene, i-pentane, n-decane, toluene, propane, n-undecane, o-xylene, methyl cyclohexane and ethyl benzene were the top 10 VOC species that should be preferentially controlled for VOC emission control, whereas m/p-xylene, o-xylene, ethylene, n-butane, toluene, propene, 1,2,4-trimethyl benzene, 1,3,5-trimethyl benzene, ethyl benzene and 1,2,3-trimethyl benzene were the top 10 species requiring preferential control for O3 pollution control.
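
    The OFP calculation underlying such a priority ranking is simply the emission of each species multiplied by its MIR; the sketch below uses placeholder emissions and MIR values, not the study's inventory.

      # OFP_i = emission_i x MIR_i, summed and ranked per species.
      species_emissions_t = {          # annual emissions by species (tonnes), illustrative
          "toluene": 120.0,
          "m/p-xylene": 80.0,
          "n-butane": 200.0,
      }
      mir_g_o3_per_g = {               # maximum incremental reactivity, illustrative values
          "toluene": 4.0,
          "m/p-xylene": 7.8,
          "n-butane": 1.2,
      }

      ofp = {s: e * mir_g_o3_per_g[s] for s, e in species_emissions_t.items()}
      total = sum(ofp.values())
      for s, v in sorted(ofp.items(), key=lambda kv: kv[1], reverse=True):
          print(f"{s:12s} OFP = {v:7.1f} t O3  ({100 * v / total:4.1f}% of total)")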

  12. Development and Application of a Life Cycle-Based Model to Evaluate Greenhouse Gas Emissions of Oil Sands Upgrading Technologies.

    Science.gov (United States)

    Pacheco, Diana M; Bergerson, Joule A; Alvarez-Majmutov, Anton; Chen, Jinwen; MacLean, Heather L

    2016-12-20

    A life cycle-based model, OSTUM (Oil Sands Technologies for Upgrading Model), which evaluates the energy intensity and greenhouse gas (GHG) emissions of current oil sands upgrading technologies, is developed. Upgrading converts oil sands bitumen into high quality synthetic crude oil (SCO), a refinery feedstock. OSTUM's novel attributes include the following: the breadth of technologies and upgrading operations options that can be analyzed, energy intensity and GHG emissions being estimated at the process unit level, it not being dependent on a proprietary process simulator, and use of publicly available data. OSTUM is applied to a hypothetical, but realistic, upgrading operation based on delayed coking, the most common upgrading technology, resulting in emissions of 328 kg CO2e/m3 SCO. The primary contributor to upgrading emissions (45%) is the use of natural gas for hydrogen production through steam methane reforming, followed by the use of natural gas as fuel in the rest of the process units' heaters (39%). OSTUM's results are in agreement with those of a process simulation model developed by CanmetENERGY, other literature, and confidential data of a commercial upgrading operation. For the application of the model, emissions are found to be most sensitive to the amount of natural gas utilized as feedstock by the steam methane reformer. OSTUM is capable of evaluating the impact of different technologies, feedstock qualities, operating conditions, and fuel mixes on upgrading emissions, and its life cycle perspective allows easy incorporation of results into well-to-wheel analyses.

  13. Basing assessment and treatment of problem behavior on behavioral momentum theory: Analyses of behavioral persistence.

    Science.gov (United States)

    Schieltz, Kelly M; Wacker, David P; Ringdahl, Joel E; Berg, Wendy K

    2017-08-01

    The connection, or bridge, between applied and basic behavior analysis has been long-established (Hake, 1982; Mace & Critchfield, 2010). In this article, we describe how clinical decisions can be based more directly on behavioral processes and how basing clinical procedures on behavioral processes can lead to improved clinical outcomes. As a case in point, we describe how applied behavior analyses of maintenance, and specifically the long-term maintenance of treatment effects related to problem behavior, can be adjusted and potentially enhanced by basing treatment on Behavioral Momentum Theory. We provide a brief review of the literature including descriptions of two translational studies that proposed changes in how differential reinforcement of alternative behavior treatments are conducted based on Behavioral Momentum Theory. We then describe current clinical examples of how these translations are continuing to impact the definitions, designs, analyses, and treatment procedures used in our clinical practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Understanding NOx emission trends in China based on OMI observations

    Science.gov (United States)

    Wang, Y.; Ga, D.; Smeltzer, C. D.; Yi, R.; Liu, Z.

    2012-12-01

    We analyze OMI observations of NO2 columns over China from 2005 to 2010. Simulations using a regional 3-D chemical transport model (REAM) are used to derive the top-down anthropogenic NOx emissions. The Kendall method is then applied to derive the emission trend. The emission trend is affected by the economic slowdown in 2009. After removing the effect of this one abnormal year, the overall emission trend is 4.35±1.42% per year, which is slower than the linear-regression trend of 5.8-10.8% per year reported for previous years. We find large regional, seasonal, and urban-rural variations in the emission trend. The annual emission trends of Northeast China, the Central China Plain, the Yangtze River Delta and the Pearl River Delta are 44.98±1.39%, 5.24±1.63%, 3.31±1.02% and -4.02±1.87%, respectively. The annual emission trends of the four megacities Beijing, Shanghai, Guangzhou and Shenzhen are 0.7±0.27%, -0.75±0.31%, -4.08±1.21% and -6.22±2.85%, respectively, considerably lower than the regional averages. These results appear to suggest that a number of factors, including migration of high-emission industries, vehicle emission regulations, emission control measures at thermal power plants, and increased hydro-power usage, have reduced or reversed the increasing trend of NOx emissions in the more economically developed megacities and southern coastal regions.
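
    A Kendall-type trend estimate of the kind described above can be sketched as follows; the emission series is invented for illustration, and the exact procedure used in the study may differ (here Kendall's tau gives significance and the Theil-Sen slope gives the annual rate, with the anomalous year excluded).

      import numpy as np
      from scipy import stats

      years = np.arange(2005, 2011)
      emissions = np.array([10.0, 10.5, 11.0, 11.6, 10.9, 12.1])   # Tg/yr, illustrative
      keep = years != 2009                       # drop the economic-slowdown year

      tau, p = stats.kendalltau(years[keep], emissions[keep])
      slope, intercept, lo, hi = stats.theilslopes(emissions[keep], years[keep])
      mean_level = emissions[keep].mean()
      print(f"tau={tau:.2f} (p={p:.3f}), trend = {100 * slope / mean_level:.1f} %/yr "
            f"[{100 * lo / mean_level:.1f}, {100 * hi / mean_level:.1f}]")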

  15. PCA-based algorithm for calibration of spectrophotometric analysers of food

    International Nuclear Information System (INIS)

    Morawski, Roman Z; Miekina, Andrzej

    2013-01-01

    Spectrophotometric analysers of food, instruments for determining the composition of food products and ingredients, are today of growing importance for the food industry, as well as for food distributors and consumers. Their metrological performance depends significantly on the numerical performance of the available means for spectrophotometric data processing, in particular the means for calibration of the analysers. In this paper, a new algorithm for this purpose is proposed, viz. an algorithm using principal component analysis (PCA). It is almost as efficient as PLS-based calibration algorithms, but much simpler.
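
    A minimal sketch of principal-component-based calibration is shown below; the synthetic spectra, reference values and the choice of five components are illustrative assumptions, not the authors' algorithm.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      n_samples, n_wavelengths = 60, 300
      concentration = rng.uniform(0.0, 10.0, n_samples)       # reference analyte values
      pure = np.exp(-0.5 * ((np.arange(n_wavelengths) - 150) / 20.0) ** 2)
      spectra = (np.outer(concentration, pure)
                 + 0.01 * rng.standard_normal((n_samples, n_wavelengths)))

      # Principal component regression: project spectra onto a few PCA
      # components, then fit an ordinary least-squares model on the scores.
      pcr = make_pipeline(PCA(n_components=5), LinearRegression())
      pcr.fit(spectra[:40], concentration[:40])
      pred = pcr.predict(spectra[40:])
      rmse = np.sqrt(np.mean((pred - concentration[40:]) ** 2))
      print(f"calibration RMSE on held-out spectra: {rmse:.3f}")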

  16. A Server-Client-Based Graphical Development Environment for Physics Analyses (VISPA)

    International Nuclear Information System (INIS)

    Bretz, H-P; Erdmann, M; Fischer, R; Hinzmann, A; Klingebiel, D; Komm, M; Müller, G; Rieger, M; Steffens, J; Steggemann, J; Urban, M; Winchen, T

    2012-01-01

    The Visual Physics Analysis (VISPA) project provides a graphical development environment for data analysis. It addresses the typical development cycle of (re-)designing, executing, and verifying an analysis. We present the new server-client-based web application of the VISPA project to perform physics analyses via a standard internet browser. This enables individual scientists to work with a large variety of devices including touch screens, and teams of scientists to share, develop, and execute analyses on a server via the web interface.

  17. Ventilation/perfusion SPECT/CT in patients with pulmonary emphysema. Evaluation of software-based analysing.

    Science.gov (United States)

    Schreiter, V; Steffen, I; Huebner, H; Bredow, J; Heimann, U; Kroencke, T J; Poellinger, A; Doellinger, F; Buchert, R; Hamm, B; Brenner, W; Schreiter, N F

    2015-01-01

    The purpose of this study was to evaluate the reproducibility of a new software-based analysis system for ventilation/perfusion single-photon emission computed tomography/computed tomography (V/P SPECT/CT) in patients with pulmonary emphysema and to compare it with visual interpretation. Nineteen patients (mean age: 68.1 years) with pulmonary emphysema who underwent V/P SPECT/CT were included. Data were analysed by two independent observers using visual interpretation (VI) and the software-based analysis system (SBAS). For SBAS, PMOD version 3.4 (PMOD Technologies Ltd, Zurich, Switzerland) was used to assess counts and volume per lung lobe and per lung, and to calculate the count density per lung and lobe, the ratio of counts and the ratio of count density. VI was performed using a visual scale to assess the mean counts per lung lobe. Interobserver variability and association for SBAS and VI were analysed using Spearman's rho correlation coefficient. Interobserver agreement was high for perfusion (rho: 0.982, 0.957, 0.90, 0.979) and ventilation (rho: 0.972, 0.924, 0.941, 0.936) for counts/count density per lobe and the ratios of counts/count density with SBAS, whereas it was clear for perfusion (rho: 0.655) and weak for ventilation (rho: 0.458) with VI. SBAS provides more reproducible measures than VI for the relative tracer uptake in V/P SPECT/CT in patients with pulmonary emphysema. However, SBAS has to be improved for routine clinical use.

  18. Carbon dioxide emissions from non-energy use of fossil fuels. Summary of key issues and conclusions from the country analyses

    International Nuclear Information System (INIS)

    Patel, Martin; Neelis, Maarten; Gielen, Dolf; Olivier, Jos; Simmons, Tim; Theunis, Jan

    2005-01-01

    The non-energy use of fossil fuels is a source of carbon dioxide (CO2) emissions that is not negligible and has been increasing substantially in the last three decades. Current emission estimates for this source category are subject to major uncertainties. One important reason is that non-energy use as published in energy statistics is not defined in a consistent manner, rendering calculation results based on these data incomparable across countries (this concerns in particular the Intergovernmental Panel on Climate Change (IPCC) Reference Approach). Further reasons are the complexity and interlinkage of the energy and material flows in the chemical/petrochemical sector and the current use of storage fractions as default values in the IPCC Reference Approach, which are based on a different definition of storage and refer to other flows than those available from energy statistics. Several other shortcomings of the IPCC Reference Approach are identified in this paper, e.g. the fact that it neglects international trade of synthetic organic products. In order to improve emissions accounting, the Non-Energy Use and CO2 Emissions (NEU-CO2) network developed a model called Non-Energy Use Emission Accounting Tables (NEAT), which is based on Material Flow Analysis (MFA). The NEAT model and other MFA approaches have been applied to several countries. In this paper, the results for Italy, Japan, Korea, the Netherlands and the USA are compared with the values published in National Communications to the United Nations Framework Convention on Climate Change (UNFCCC). It is shown that international harmonisation of the data sources (energy statistics) and the methods applied would lead to substantially different emission results for some countries, in the order of several percent. Moreover, the NEAT model and the other MFA approaches have proved to be valuable tools for identifying errors in energy statistics. These results confirm the need for enhanced efforts to improve and harmonise energy

  19. Automatic image-based analyses using a coupled quadtree-SBFEM/SCM approach

    Science.gov (United States)

    Gravenkamp, Hauke; Duczek, Sascha

    2017-10-01

    Quadtree-based domain decomposition algorithms offer an efficient option to create meshes for automatic image-based analyses. Without introducing hanging nodes, the scaled boundary finite element method (SBFEM) can directly operate on such meshes by discretizing only the edges of each subdomain. However, the convergence of a numerical method that relies on a quadtree-based geometry approximation is often suboptimal due to the inaccurate representation of the boundary. To overcome this problem, a combination of the SBFEM with the spectral cell method (SCM) is proposed. The basic idea is to treat each uncut quadtree cell as an SBFEM polygon, while all cut quadtree cells are computed employing the SCM. This methodology not only reduces the required number of degrees of freedom but also avoids a two-dimensional quadrature in all uncut quadtree cells. Numerical examples including static, harmonic, modal and transient analyses of complex geometries are studied, highlighting the performance of this novel approach.
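
    The quadtree decomposition step itself (not the coupled SBFEM/SCM solver) can be sketched as follows; the toy image and cell sizes are illustrative.

      import numpy as np

      def quadtree(img, x0, y0, size, min_size, cells):
          block = img[y0:y0 + size, x0:x0 + size]
          if block.min() == block.max() or size <= min_size:
              # Uniform cells become "uncut" leaves (SBFEM polygons);
              # mixed cells at minimum size remain "cut" (handled by the SCM).
              kind = "uncut" if block.min() == block.max() else "cut"
              cells.append((x0, y0, size, kind))
              return
          h = size // 2
          for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
              quadtree(img, x0 + dx, y0 + dy, h, min_size, cells)

      # Toy image: a filled circle on a 64x64 grid (1 = material, 0 = void).
      n = 64
      yy, xx = np.mgrid[0:n, 0:n]
      img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 24 ** 2).astype(np.uint8)

      cells = []
      quadtree(img, 0, 0, n, 4, cells)
      n_uncut = sum(1 for c in cells if c[3] == "uncut")
      print(f"{len(cells)} cells, {n_uncut} uncut (SBFEM polygons), "
            f"{len(cells) - n_uncut} cut (SCM)")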

  20. Engineering design and exergy analyses for combustion gas turbine based power generation system

    International Nuclear Information System (INIS)

    Sue, D.-C.; Chuang, C.-C.

    2004-01-01

    This paper presents the engineering design and theoretical exergy analyses of combustion gas turbine based power generation plants. Exergy analysis is performed based on the first and second laws of thermodynamics for power generation systems. The results show that the exergy analysis of the steam cycle system predicts the plant efficiency more precisely. The plant efficiency for partial load operation is lower than for full load operation. Increasing the pinch points will decrease the combined cycle plant efficiency. The engineering design is based on inlet air cooling and natural gas preheating to increase the net power output and efficiency. To evaluate the energy utilization, one combined cycle unit and one cogeneration system, consisting of gas turbine generators, heat recovery steam generators and one steam turbine generator with steam extracted for process use, have been analyzed. The analytical results are used for engineering design and component selection.
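
    The flow-exergy bookkeeping behind such analyses can be sketched as follows; the dead-state values, steam states and mass flow are illustrative placeholders, not data from the paper.

      # Specific flow exergy relative to a dead state: ex = (h - h0) - T0*(s - s0),
      # and a second-law (exergy) efficiency for one component.
      T0 = 298.15              # dead-state temperature, K
      H0, S0 = 104.9, 0.367    # dead-state enthalpy (kJ/kg) and entropy (kJ/kg.K) of water

      def flow_exergy(h, s):
          """Specific flow exergy in kJ/kg (kinetic/potential terms neglected)."""
          return (h - H0) - T0 * (s - S0)

      # Example component: a steam turbine stage (illustrative states).
      m_dot = 100.0                      # steam mass flow, kg/s
      h_in, s_in = 3400.0, 6.8           # inlet enthalpy/entropy
      h_out, s_out = 2300.0, 7.2         # outlet enthalpy/entropy
      work = m_dot * (h_in - h_out)      # shaft power, kW (adiabatic turbine)
      ex_drop = m_dot * (flow_exergy(h_in, s_in) - flow_exergy(h_out, s_out))
      print(f"shaft power {work:.0f} kW, exergy supplied {ex_drop:.0f} kW, "
            f"second-law efficiency {100 * work / ex_drop:.1f}%")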

  1. Characterization of the emissions impacts of hybrid excavators with a portable emissions measurement system (PEMS)-based methodology.

    Science.gov (United States)

    Cao, Tanfeng; Russell, Robert L; Durbin, Thomas D; Cocker, David R; Burnette, Andrew; Calavita, Joseph; Maldonado, Hector; Johnson, Kent C

    2018-04-13

    Hybrid engine technology is a potentially important strategy for reduction of tailpipe greenhouse gas (GHG) emissions and other pollutants that is now being implemented for off-road construction equipment. The goal of this study was to evaluate the emissions and fuel consumption impacts of electric-hybrid excavators using a Portable Emissions Measurement System (PEMS)-based methodology. In this study, three hybrid and four conventional excavators were studied for both real world activity patterns and tailpipe emissions. Activity data was obtained using engine control module (ECM) and global positioning system (GPS) logged data, coupled with interviews, historical records, and video. This activity data was used to develop a test cycle with seven modes representing different types of excavator work. Emissions data were collected over this test cycle using a PEMS. The results indicated the HB215 hybrid excavator provided a significant reduction in tailpipe carbon dioxide (CO2) emissions (from -13 to -26%), but increased diesel particulate matter (PM) (+26 to +27%) when compared to a similar model conventional excavator over the same duty cycle. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Combining rate-based and cap-and-trade emissions policies

    International Nuclear Information System (INIS)

    Fischer, Carolyn

    2003-12-01

    Rate-based emissions policies (like tradable performance standards, TPS) fix average emissions intensity, while cap-and-trade (CAT) policies fix total emissions. This paper shows that unfettered trade between rate-based and cap-and-trade programs always raises combined emissions, except when product markets are related in particular ways. Gains from trade are fully passed on to consumers in the rate-based sector, resulting in more output and greater emissions allocations. We consider several policy options to offset the expansion, including a tax, an 'exchange rate' to adjust for relative permit values, output-based allocation (OBA) for the rate-based sector, and tightening the cap. A range of combinations of tighter allocations could improve situations in both sectors with trade while holding emissions constant.

  3. Greenhouse gas emissions accounting of urban residential consumption: a household survey based approach.

    Directory of Open Access Journals (Sweden)

    Tao Lin

    Full Text Available Devising policies for a low carbon city requires a careful understanding of the characteristics of urban residential lifestyle and consumption. The production-based accounting approach based on top-down statistical data has a limited ability to reflect the total greenhouse gas (GHG) emissions from residential consumption. In this paper, we present a survey-based GHG emissions accounting methodology for urban residential consumption, and apply it in Xiamen City, a rapidly urbanizing coastal city in southeast China. Based on this, the main influencing factors determining residential GHG emissions at the household and community scale are identified, and the typical profiles of low, medium and high GHG emission households and communities are identified. Up to 70% of household GHG emissions are from regional and national activities that support household consumption including the supply of energy and building materials, while 17% are from urban level basic services and supplies such as sewage treatment and solid waste management, and only 13% are direct emissions from household consumption. Housing area and household size are the two main factors determining GHG emissions from residential consumption at the household scale, while average housing area and building height were the main factors at the community scale. Our results show a large disparity in GHG emissions profiles among different households, with high GHG emissions households emitting about five times more than low GHG emissions households. Emissions from high GHG emissions communities are about twice as high as from low GHG emissions communities. Our findings can contribute to better tailored and targeted policies aimed at reducing household GHG emissions, and developing low GHG emissions residential communities in China.

  4. Greenhouse Gas Emissions Accounting of Urban Residential Consumption: A Household Survey Based Approach

    Science.gov (United States)

    Lin, Tao; Yu, Yunjun; Bai, Xuemei; Feng, Ling; Wang, Jin

    2013-01-01

    Devising policies for a low carbon city requires a careful understanding of the characteristics of urban residential lifestyle and consumption. The production-based accounting approach based on top-down statistical data has a limited ability to reflect the total greenhouse gas (GHG) emissions from residential consumption. In this paper, we present a survey-based GHG emissions accounting methodology for urban residential consumption, and apply it in Xiamen City, a rapidly urbanizing coastal city in southeast China. Based on this, the main influencing factors determining residential GHG emissions at the household and community scale are identified, and the typical profiles of low, medium and high GHG emission households and communities are identified. Up to 70% of household GHG emissions are from regional and national activities that support household consumption including the supply of energy and building materials, while 17% are from urban level basic services and supplies such as sewage treatment and solid waste management, and only 13% are direct emissions from household consumption. Housing area and household size are the two main factors determining GHG emissions from residential consumption at the household scale, while average housing area and building height were the main factors at the community scale. Our results show a large disparity in GHG emissions profiles among different households, with high GHG emissions households emitting about five times more than low GHG emissions households. Emissions from high GHG emissions communities are about twice as high as from low GHG emissions communities. Our findings can contribute to better tailored and targeted policies aimed at reducing household GHG emissions, and developing low GHG emissions residential communities in China. PMID:23405187
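
    The survey-based accounting idea shared by both records above can be sketched as follows; the consumption items, emission factors and tier assignments are invented for illustration and are not the study's data.

      # Household emissions = surveyed consumption x emission factor, grouped into
      # the direct / urban-service / regional-and-national tiers described above.
      household = {                     # annual consumption from a survey questionnaire
          "electricity_kwh": 3200.0,
          "lpg_kg": 90.0,
          "water_m3": 110.0,
          "housing_m2_new": 2.5,        # annualised share of dwelling construction
      }
      factor_kgco2e = {                 # emission factor per unit of consumption
          "electricity_kwh": 0.8,
          "lpg_kg": 3.0,
          "water_m3": 0.4,
          "housing_m2_new": 350.0,
      }
      tier = {                          # accounting tier for each item
          "electricity_kwh": "regional/national supply",
          "lpg_kg": "direct household",
          "water_m3": "urban basic services",
          "housing_m2_new": "regional/national supply",
      }

      emissions = {k: v * factor_kgco2e[k] for k, v in household.items()}
      total = sum(emissions.values())
      by_tier = {}
      for k, e in emissions.items():
          by_tier[tier[k]] = by_tier.get(tier[k], 0.0) + e
      for t, e in by_tier.items():
          print(f"{t:28s} {e:8.0f} kgCO2e  ({100 * e / total:4.1f}%)")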

  5. Coalescent-based genome analyses resolve the early branches of the euarchontoglires.

    Directory of Open Access Journals (Sweden)

    Vikas Kumar

    Full Text Available Despite numerous large-scale phylogenomic studies, certain parts of the mammalian tree are extraordinarily difficult to resolve. We used the coding regions from 19 completely sequenced genomes to study the relationships within the super-clade Euarchontoglires (Primates, Rodentia, Lagomorpha, Dermoptera and Scandentia) because the placement of Scandentia within this clade is controversial. The difficulty in resolving this issue is due to the short time spans between the early divergences of Euarchontoglires, which may cause incongruent gene trees. The conflict in the data can be depicted by network analyses and the contentious relationships are best reconstructed by coalescent-based analyses. This method is expected to be superior to analyses of concatenated data in reconstructing a species tree from numerous gene trees. The total concatenated dataset used to study the relationships in this group comprises 5,875 protein-coding genes (9,799,170 nucleotides) from all orders except Dermoptera (flying lemurs). Reconstruction of the species tree from 1,006 gene trees using coalescent models placed Scandentia as sister group to the primates, which is in agreement with maximum likelihood analyses of concatenated nucleotide sequence data. Additionally, both analytical approaches favoured the Tarsier to be sister taxon to Anthropoidea, thus belonging to the Haplorrhine clade. When divergence times are short such as in radiations over periods of a few million years, even genome scale analyses struggle to resolve phylogenetic relationships. On these short branches processes such as incomplete lineage sorting and possibly hybridization occur and make it preferable to base phylogenomic analyses on coalescent methods.

  6. THE GOAL OF VALUE-BASED MEDICINE ANALYSES: COMPARABILITY. THE CASE FOR NEOVASCULAR MACULAR DEGENERATION

    Science.gov (United States)

    Brown, Gary C.; Brown, Melissa M.; Brown, Heidi C.; Kindermann, Sylvia; Sharma, Sanjay

    2007-01-01

    Purpose To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). Methods A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Results Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy

  7. The goal of value-based medicine analyses: comparability. The case for neovascular macular degeneration.

    Science.gov (United States)

    Brown, Gary C; Brown, Melissa M; Brown, Heidi C; Kindermann, Sylvia; Sharma, Sanjay

    2007-01-01

    To evaluate the comparability of articles in the peer-reviewed literature assessing the (1) patient value and (2) cost-utility (cost-effectiveness) associated with interventions for neovascular age-related macular degeneration (ARMD). A search was performed in the National Library of Medicine database of 16 million peer-reviewed articles using the key words cost-utility, cost-effectiveness, value, verteporfin, pegaptanib, laser photocoagulation, ranibizumab, and therapy. All articles that used an outcome of quality-adjusted life-years (QALYs) were studied in regard to (1) percent improvement in quality of life, (2) utility methodology, (3) utility respondents, (4) types of costs included (eg, direct healthcare, direct nonhealthcare, indirect), (5) cost bases (eg, Medicare, National Health Service in the United Kingdom), and (6) study cost perspective (eg, government, societal, third-party insurer). To qualify as a value-based medicine analysis, the patient value had to be measured using the outcome of the QALYs conferred by respective interventions. As with value-based medicine analyses, patient-based time tradeoff utility analysis had to be utilized, patient utility respondents were necessary, and direct medical costs were used. Among 21 cost-utility analyses performed on interventions for neovascular macular degeneration, 15 (71%) met value-based medicine criteria. The 6 others (29%) were not comparable owing to (1) varying utility methodology, (2) varying utility respondents, (3) differing costs utilized, (4) differing cost bases, and (5) varying study perspectives. Among value-based medicine studies, laser photocoagulation confers a 4.4% value gain (improvement in quality of life) for the treatment of classic subfoveal choroidal neovascularization. Intravitreal pegaptanib confers a 5.9% value gain (improvement in quality of life) for classic, minimally classic, and occult subfoveal choroidal neovascularization, and photodynamic therapy with verteporfin confers

  8. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the coming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and support efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Examining air pollution in China using production- and consumption-based emissions accounting approaches.

    Science.gov (United States)

    Huo, Hong; Zhang, Qiang; Guan, Dabo; Su, Xin; Zhao, Hongyan; He, Kebin

    2014-12-16

    Two important reasons for China's air pollution are the high emission factors (emission per unit of product) of pollution sources and the high emission intensity (emissions per unit of GDP) of the industrial structure. Therefore, a wide variety of policy measures, including both emission abatement technologies and economic adjustment, must be implemented. To support such measures, this study used the production- and consumption-based emissions accounting approaches to simulate the SO2, NOx, PM2.5, and VOC emissions flows among producers and consumers. This study analyzed the emissions and GDP performance of 36 production sectors. The results showed that the equipment, machinery, and devices manufacturing and construction sectors contributed more than 50% of air pollutant emissions, and most of their products were used for capital formation and export. The service sector had the lowest emission intensities, and its output was mainly consumed by households and the government. In China, the emission intensities of production activities triggered by capital formation and export were approximately twice that of the service sector triggered by final consumption expenditure. This study suggests that China should control air pollution using the following strategies: applying end-of-pipe abatement technologies and using cleaner fuels to further decrease the emission factors associated with rural cooking, electricity generation, and the transportation sector; continuing to limit highly emission-intensive but low value-added exports; developing a plan to reduce construction activities; and increasing the proportion of service GDP in the national economy.
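
    The standard input-output formulation behind production- versus consumption-based accounting can be sketched as follows; the two-sector coefficients and emission figures are invented, and the study's actual model is far more detailed.

      import numpy as np

      A = np.array([[0.20, 0.30],        # technical coefficients (inputs per unit output)
                    [0.10, 0.15]])
      y = np.array([[100.0,  40.0],      # final demand by category (columns), e.g.
                    [ 60.0, 120.0]])     # consumption vs capital formation + export
      e_prod = np.array([500.0, 80.0])   # production-based emissions by sector (kt)

      x = np.linalg.solve(np.eye(2) - A, y.sum(axis=1))   # total output by sector
      f = e_prod / x                                      # emission intensity per unit output
      L = np.linalg.inv(np.eye(2) - A)                    # Leontief inverse
      e_cons = f @ L @ y                                  # emissions attributed to each final-demand category

      print("production-based totals by sector :", e_prod)
      print("consumption-based totals by demand:", e_cons.round(1))
      print("check (both sum to the same total):", e_prod.sum(), e_cons.sum().round(1))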

  10. Incentive-based regulation of CO2 emissions from international aviation

    International Nuclear Information System (INIS)

    Carlsson, F.; Hammar, H.

    2002-01-01

    We explore the possibilities of using incentive-based environmental regulations of CO2 emissions from international civil aviation. In theory, incentive-based instruments such as an emission charge or a tradable emission permit system are better regulations than so-called command-and-control regulations such as emission limits or technology standards. However, the implementation of these instruments is a complex issue. We therefore describe and discuss how an emission charge and a tradable emission permit system for international aviation should be designed in order to improve efficiency. We also compare these two types of regulations. In brief, we find that an emission charge and a tradable emission permit system in which the permits are auctioned have more or less the same characteristics. The main advantage of a tradable emission permit system is that the effect, in terms of emission reductions, is known. On the other hand, we show that under uncertainty an emission charge is preferred. The choice of regulation is a political decision and it does not seem likely that an environmental charge or a tradable emission permit system would be implemented without consideration of the costs of the regulation. Revenue-neutral charges or gratis distribution of permits would, for this reason, be realistic choices of regulations. However, such actions are likely to result in less stringent regulations and other negative welfare effects. (author)

  11. Optimization of digestion parameters for analysing the total sulphur of mine tailings by inductively coupled plasma optical emission spectrometry.

    Science.gov (United States)

    Alam, Raquibul; Shang, Julie Q; Cheng, Xiangrong

    2012-05-01

    The oxidation of sulphidic mine tailings and the consequent acid generation pose challenges for the environment. Accurate and precise analysis of sulphur content is necessary for impact assessment and management of mine tailings. Here, the authors aim to develop a rapid and easy digestion procedure for measuring the total sulphur content of mine tailings by inductively coupled plasma optical emission spectrometry. To evaluate the effects of several variables, a univariate analysis of variance (ANOVA) strategy was used, considering factors such as the composition of the acid mixture, heating time and refluxing device, in order to optimize the performance. Two certified reference materials (KZK-1 and RTS-2) and samples of tailings from Musselwhite mine were used in the experiments. The ANOVA results show that heating time is the most influential factor in the acid digestion of the reference materials, whereas for the digestion of the tailings samples, hydrochloric acid proved to be the most significant parameter. Satisfactory agreement between the measured and reference values was found for all experiments. It is found that aqua regia (1 ml HNO3 + 3 ml HCl) digestion of 0.1 g of sample after only 40 min of heating at 95°C produces fast, safe and accurate analytical results, with a recovery of 97% for the selected reference materials.

  12. A Versatile Software Package for Inter-subject Correlation Based Analyses of fMRI

    Directory of Open Access Journals (Sweden)

    Jukka-Pekka Kauppi

    2014-01-01

    Full Text Available In the inter-subject correlation (ISC) based analysis of the functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modelling the stimulus and thus ISC is a potential method to analyze fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC based approach to analyze complex fMRI data, no generic software tools have been made available for this purpose, limiting a widespread use of ISC based analysis techniques among the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC based analyses are data and computation intensive and the ISC toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically and with an automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account on the methods behind the ISC Toolbox, the implementation of the toolbox and demonstrate the possible use of the toolbox by summarizing selected example applications. We also report the computation time experiments both using a single desktop computer and two grid environments demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available in https://code.google.com/p/isc-toolbox/.

  13. A versatile software package for inter-subject correlation based analyses of fMRI.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Pajula, Juha; Tohka, Jussi

    2014-01-01

    In the inter-subject correlation (ISC) based analysis of the functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus and thus ISC is a potential method to analyze fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC based approach to analyze complex fMRI data, no generic software tools have been made available for this purpose, limiting a widespread use of ISC based analysis techniques among the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC based analyses are data and computation intensive and the ISC toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically and with an automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account on the methods behind the ISC Toolbox, the implementation of the toolbox and demonstrate the possible use of the toolbox by summarizing selected example applications. We also report the computation time experiments both using a single desktop computer and two grid environments demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available in https://code.google.com/p/isc-toolbox/
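
    The core ISC computation described in both records (not the toolbox's full feature set) can be sketched as follows with synthetic data: for each voxel, the Pearson correlation of the time series is averaged over all subject pairs.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      n_subjects, n_timepoints, n_voxels = 5, 200, 10
      shared = rng.standard_normal((n_timepoints, n_voxels))         # stimulus-driven signal
      data = np.stack([0.6 * shared + rng.standard_normal((n_timepoints, n_voxels))
                       for _ in range(n_subjects)])                  # (subject, time, voxel)

      def isc_map(data):
          # z-score each subject's time series, then average per-voxel
          # correlations over all subject pairs.
          z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
          pair_r = [np.mean(z[i] * z[j], axis=0)
                    for i, j in combinations(range(data.shape[0]), 2)]
          return np.mean(pair_r, axis=0)

      print("mean ISC over voxels:", isc_map(data).mean().round(3))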

  14. Failure Mechanism of Rock Bridge Based on Acoustic Emission Technique

    Directory of Open Access Journals (Sweden)

    Guoqing Chen

    2015-01-01

    Full Text Available Acoustic emission (AE) technique is widely used in various fields as a reliable nondestructive examination technology. Two experimental tests were carried out in a rock mechanics laboratory: (1) small-scale direct shear tests of rock bridges with different lengths and (2) a large-scale landslide model with a locked section. The relationship between AE event count and recording time was analyzed during the tests. AE source location was performed and compared with the actual failure modes. In both the small-scale tests and the large-scale landslide model test, the AE technique accurately located the AE source points, reflecting the initiation and propagation of internal cracks in the rock samples. The large-scale landslide model test with a locked section showed that rock bridges in rocky slopes exhibit typical brittle failure behavior. The two AE-based tests revealed the rock failure mechanism in rocky slopes and clarified the cause of high-speed, long-distance sliding of rocky slopes.

  15. Problem shifting in transport systems. Analysing and balancing unintended consequences of CO2 emission reduction in Dutch transport.

    NARCIS (Netherlands)

    Gebler, Malte

    2013-01-01

    Summary Transport systems face significant input- and output-related challenges in the upcoming decades. To tackle climate change, the major output challenge, an 80% CO2 reduction has to be achieved by 2050 (base year 1990). This requires a sustainabi

  16. Initial results of detected methane emissions from landfills in the Los Angeles Basin during the COMEX campaign by the Methane Airborne MAPper (MAMAP) instrument and a greenhouse gas in-situ analyser

    Science.gov (United States)

    Krautwurst, Sven; Gerilowski, Konstantin; Kolyer, Richard; Jonsson, Haflidi; Krings, Thomas; Horstjann, Markus; Leifer, Ira; Vigil, Sam; Buchwitz, Michael; Schüttemeyer, Dirk; Fladeland, Matthew M.; Burrows, John P.; Bovensmann, Heinrich

    2015-04-01

    Methane (CH4) is the second most important anthropogenic greenhouse gas beside carbon dioxide (CO2). Significant contributors to the global methane budget are fugitive emissions from landfills. Due to the growing world population, it is expected that the amount of waste and, therefore, the number and size of waste disposal sites will increase in parts of the world, often adjacent to growing megacities. Besides bottom-up modelling, a variety of ground-based methods (e.g., flux chambers, trace gases, radial plume mapping) have been used to estimate (top-down) these fugitive emissions. Because landfills usually are large, sometimes have significant topographic relief, vary temporally, and leak/emit heterogeneously across their surface area, assessing the total emission strength by ground-based techniques is often difficult. In this work, we show how airborne remote sensing measurements of the column-averaged dry air mole fraction of CH4 can be utilized to estimate fugitive emissions from landfills in an urban environment by a mass balance approach. Subsequently, these emission rates are compared to airborne in-situ horizontal cross-section measurements of CH4 taken within the planetary boundary layer (PBL) upwind and downwind of the landfill at different altitudes immediately after the remote sensing measurements were finished. Additional parameters necessary for the data inversion (e.g., wind direction, wind speed, aerosols, dew point temperature) are provided by a standard instrumentation suite for atmospheric measurements aboard the aircraft and by nearby ground-based weather stations. These measurements were part of the CO2 and Methane EXperiment (COMEX), which was carried out during the summer of 2014 in California and was co-funded by the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA). The remote sensing measurements were taken by the Methane Airborne MAPper (MAMAP) developed and operated by the University of Bremen and

  17. Emission of Isothiazolinones from Water-Based Paints

    DEFF Research Database (Denmark)

    Lundov, Michael D; Kolarik, Barbara; Bossi, Rossana

    2014-01-01

    were measured in climate chambers and in an apartment. Nineteen paints were analyzed for the content of MI, MCI, and BIT. All 19 paints contained MI, 16 contained BIT, and 4 contained MCI. In the chamber experiment emission of MI peaked within hours of application but then continued at a slow rate...... for more than 42 days. MCI was emitted more slowly and peaked after several days. BIT emissions were all around the limit of detection. In the apartment we were able to detect emission of MI several days after application. Long lasting evaporation and thus chronic exposure give credibility to the clinical...

  18. Analysing task design and students' responses to context-based problems through different analytical frameworks

    Science.gov (United States)

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background: Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose: The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample: Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods: A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results: The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been

  19. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
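
    The precomputation idea can be sketched as follows; the coordinates, atom labels and cutoffs are invented and do not reflect the actual Protein Relational Database schema.

      import numpy as np
      from scipy.spatial import cKDTree

      protein_atoms = np.array([[0.0, 0.0, 0.0], [3.0, 0.5, 0.0], [6.0, 1.0, 1.0]])
      protein_names = ["ASP25:OD1", "GLY27:N", "ILE50:CD1"]
      ligand_atoms = np.array([[2.8, 0.4, 0.2], [5.5, 1.2, 0.8]])
      ligand_names = ["LIG:N1", "LIG:O2"]

      # One-off precomputation: store all protein-ligand atom pairs within 5 A.
      pairs = cKDTree(protein_atoms).query_ball_point(ligand_atoms, r=5.0)
      table = [(protein_names[p], ligand_names[l],
                float(np.linalg.norm(protein_atoms[p] - ligand_atoms[l])))
               for l, plist in enumerate(pairs) for p in plist]

      # Millisecond-style query on the precomputed table: a ligand nitrogen
      # within 3.5 A of an aspartate side-chain oxygen.
      hits = [row for row in table
              if row[0].startswith("ASP") and ":O" in row[0]
              and row[1].endswith("N1") and row[2] <= 3.5]
      print(hits)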

  20. Forecasting Energy CO2 Emissions Using a Quantum Harmony Search Algorithm-Based DMSFE Combination Model

    Directory of Open Access Journals (Sweden)

    Xingsheng Gu

    2013-03-01

    Full Text Available The accurate forecasting of carbon dioxide (CO2) emissions from fossil fuel energy consumption is a key requirement for making energy policy and environmental strategy. In this paper, a novel quantum harmony search (QHS) algorithm-based discounted mean square forecast error (DMSFE) combination model is proposed. In the DMSFE combination forecasting model, almost all investigations assign the discounting factor (β) arbitrarily, since β varies between 0 and 1, and adopt one value for all individual models and forecasting periods. The original method does not consider the influences of the individual model and the forecasting period. This work contributes by changing β from a single value to a matrix, taking the individual model and the forecasting period into consideration, and by presenting a way of searching for the optimal β values with the QHS algorithm through optimizing the mean absolute percent error (MAPE) objective function. The QHS algorithm-based optimized DMSFE combination forecasting model is established and tested by forecasting the CO2 emissions of the world's top-5 CO2 emitters. Evaluation indexes such as MAPE, root mean squared error (RMSE) and mean absolute error (MAE) are employed to test the performance of the presented approach. The empirical analyses confirm the validity of the presented method, and the forecasting accuracy can be increased to a certain degree.
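
    As a rough illustration, the classical DMSFE weighting that the paper generalizes can be written in a few lines; the β-matrix and its quantum harmony search optimization are not reproduced here, and the numbers below are toy data rather than emissions of the top-5 emitters.

      import numpy as np

      def dmsfe_weights(errors, beta=0.9):
          """Classical DMSFE combination weights.
          errors: (n_models, n_periods) in-sample forecast errors;
          beta:   discounting factor in (0, 1]; recent errors weigh more."""
          n_models, T = errors.shape
          discount = beta ** np.arange(T - 1, -1, -1)      # beta^(T-t)
          dmse = (discount * errors ** 2).sum(axis=1)      # discounted MSE per model
          inv = 1.0 / dmse
          return inv / inv.sum()

      def mape(actual, forecast):
          return np.mean(np.abs((actual - forecast) / actual)) * 100.0

      # Toy data: combine two individual emission forecasts.
      actual = np.array([100.0, 104.0, 109.0, 115.0])
      f1 = np.array([98.0, 105.0, 108.0, 113.0])
      f2 = np.array([103.0, 102.0, 111.0, 118.0])
      w = dmsfe_weights(np.vstack([actual - f1, actual - f2]))
      combined = w[0] * f1 + w[1] * f2
      print(w, round(mape(actual, combined), 2))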

  1. Data supporting the assessment of biomass based electricity and reduced GHG emissions in Cuba.

    Science.gov (United States)

    Sagastume Gutiérrez, Alexis; Cabello Eras, Juan J; Vandecasteele, Carlo; Hens, Luc

    2018-04-01

    Assessing the biomass-based electricity potential of developing nations like Cuba can help to reduce fossil fuel dependency and greenhouse gas emissions. The data included in this study present the evolution of electricity production and greenhouse gas emissions in Cuba. Additionally, the potential to produce biomass-based electricity from the most significant biomass sources in Cuba is estimated. Furthermore, estimates of the potential reductions in greenhouse gas emissions that would result from implementing the biomass-based electricity potential of the different sources discussed in the study are included. The results point to the most promising biomass sources for electricity generation and their potential to reduce GHG emissions.

  2. Efficient light emitting devices based on phosphorescent partially doped emissive layers

    KAUST Repository

    Yang, Xiaohui

    2013-05-29

    We report efficient organic light emitting devices employing an ultrathin phosphor emissive layer. The electroluminescent spectra of these devices can be tuned by introducing a low-energy emitting phosphor layer into the emission zone. Devices with the emissive layer consisting of multiple platinum-complex/spacer layer cells show a peak external quantum efficiency of 18.1%, which is among the best EQE values for platinum-complex based light emitting devices. Devices with an ultrathin phosphor emissive layer show stronger luminance decay with the operating time compared to the counterpart devices having a host-guest emissive layer.

  3. Estimation of effective block conductivities based on discrete network analyses using data from the Aespoe site

    International Nuclear Information System (INIS)

    La Pointe, P.R.; Wallmann, P.; Follin, S.

    1995-09-01

    Numerical continuum codes may be used for assessing the role of regional groundwater flow in far-field safety analyses of a nuclear waste repository at depth. The focus of this project is to develop and evaluate one method based on Discrete Fracture Network (DFN) models to estimate block-scale permeability values for continuum codes. Data from the Aespoe HRL and surrounding area are used. 57 refs, 76 figs, 15 tabs

  4. Comparison of integration options for gasification-based biofuel production systems – Economic and greenhouse gas emission implications

    International Nuclear Information System (INIS)

    Holmgren, Kristina M.; Berntsson, Thore S.; Andersson, Eva; Rydberg, Tomas

    2016-01-01

    The impact of different integration options for gasification-based biofuel production systems producing synthetic natural gas, methanol and FT (Fischer-Tropsch) fuels on the NAP (net annual profit), FPC (fuel production cost) and the GHG (greenhouse gas) emission reduction potential are analysed. The considered integration options are heat deliveries to DH (district heating) systems or to nearby industries and integration with infrastructure for CO_2 storage. The comparison is made to stand-alone configurations in which the excess heat is used for power production. The analysis considers future energy market scenarios and case studies in southwestern Sweden. The results show that integration with DH systems has small impacts on the NAP and the FPC and diverging (positive or negative) impacts on the GHG emissions. Integration with industries has positive effects on the economic and GHG performances in all scenarios. The FPCs are reduced by 7–8% in the methanol case and by 12–13% in the FT production case. The GHG emission reductions are strongly dependent on the reference power production. The storage of separated CO_2 shows an increase in the GHG emission reduction potential of 70–100% for all systems, whereas the impacts on the economic performances are strongly dependent on the CO_2_e-charge. - Highlights: • Three gasification-based biofuel production systems at case study sites are analysed. • Greenhouse gas emissions reduction potential and economic performance are evaluated. • Impact of integration with adjacent industry or district heating systems is analysed. • The assessment comprises future energy market scenarios including CCS infrastructure. • Utilisation options for excess heat significantly impact the evaluated parameters.

  5. A forward looking, actor based, indicator for climate gas emissions

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Torgeir; Randers, Joergen

    2011-04-15

    The most commonly used Norwegian indicator for climate change displays historical emissions and compares them with Norway's Kyoto target. This indicator says little about future emissions, about the ongoing Norwegian effort to reduce climate gas emissions, or about its effect on sustainability. In this paper we propose an indicator that remedies these weaknesses. We present a forward-looking climate indicator that, in addition to historic data, includes business-as-usual scenarios, different proposals for future domestic emissions, and national or international commitments and agreements. This indicator presents - in one graph - a broad diversity of views on how the climate challenge should be handled from now and into the future. This indicator-graph may contribute to a more transparent discussion of available policy options. (Author)

  6. PCA-based approach for subtracting thermal background emission in high-contrast imaging data

    Science.gov (United States)

    Hunziker, S.; Quanz, S. P.; Amara, A.; Meyer, M. R.

    2018-03-01

    Aims: Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, telescope and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than 3 μm. The main purpose of this work is to analyse this background emission in infrared high-contrast imaging data as illustrative of the problem, show how it can be modelled and subtracted, and demonstrate that it can improve the detection of faint sources, such as exoplanets. Methods: We used principal component analysis (PCA) to model and subtract the thermal background emission in three archival high-contrast angular differential imaging datasets in the M' and L' filters. We used an M' dataset of β Pic to describe in detail how the algorithm works and explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme applied to the same dataset. Finally, both methods for background subtraction are compared by performing complete data reductions. We analysed the results from the M' dataset of HD 100546 only qualitatively. For the M' band dataset of β Pic and the L' band dataset of HD 169142, which was obtained with an angular groove phase mask vortex vector coronagraph, we also calculated and analysed the achieved signal-to-noise ratio (S/N). Results: We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the point spread function. In the complete data reductions, we find at least qualitative improvements for HD 100546 and HD 169142; however, we fail to find a significant increase in S/N of β Pic b. We discuss these findings and argue that in particular datasets with strongly varying observing conditions or
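
    A schematic of the PCA background-modelling step is sketched below, assuming a library of pure background frames is available; the actual pipeline (frame selection, masking of the stellar PSF, choice of the number of modes) is considerably more involved.

      import numpy as np

      def pca_background_subtract(science, background_stack, n_modes=5):
          """Model the background of one frame as a combination of the leading
          principal components of a background-frame library, then subtract it."""
          nb, ny, nx = background_stack.shape
          B = background_stack.reshape(nb, -1)
          mean_bg = B.mean(axis=0)
          _, _, vt = np.linalg.svd(B - mean_bg, full_matrices=False)
          modes = vt[:n_modes]                        # (n_modes, ny*nx)
          sci = science.ravel() - mean_bg
          model = mean_bg + (modes @ sci) @ modes     # project, then reconstruct
          return science - model.reshape(ny, nx)

      # Toy usage with random data standing in for M'-band frames.
      rng = np.random.default_rng(0)
      bg = rng.normal(100.0, 5.0, size=(50, 64, 64))
      sci = bg[0].copy()
      sci[30, 30] += 50.0                             # fake faint source
      residual = pca_background_subtract(sci, bg[1:], n_modes=5)
      print(residual[30, 30] > 5 * residual.std())    # the source stands out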

  7. The first 1-year-long estimate of the Paris region fossil fuel CO2 emissions based on atmospheric inversion

    Directory of Open Access Journals (Sweden)

    J. Staufer

    2016-11-01

    Full Text Available The ability of a Bayesian atmospheric inversion to quantify the Paris region's fossil fuel CO2 emissions on a monthly basis, based on a network of three surface stations operated for 1 year as part of the CO2-MEGAPARIS experiment (August 2010–July 2011), is analysed. Differences in hourly CO2 atmospheric mole fractions between the near-ground monitoring sites (CO2 gradients), located at the north-eastern and south-western edges of the urban area, are used to estimate the 6 h mean fossil fuel CO2 emission. The inversion relies on the CHIMERE transport model run at 2 km × 2 km horizontal resolution, on the spatial distribution of fossil fuel CO2 emissions in 2008 from a local inventory established at 1 km × 1 km horizontal resolution by the AIRPARIF air quality agency, and on the spatial distribution of the biogenic CO2 fluxes from the C-TESSEL land surface model. It corrects a prior estimate of the 6 h mean budgets of the fossil fuel CO2 emissions given by the AIRPARIF 2008 inventory. We found that a stringent selection of CO2 gradients is necessary for reliable inversion results, due to large modelling uncertainties. In particular, the most robust data selection analysed in this study uses only mid-afternoon gradients if wind speeds are larger than 3 m s−1 and if the modelled wind at the upwind site is within ±15° of the transect between downwind and upwind sites. This stringent data selection removes 92 % of the hourly observations. Even though this leaves few remaining data to constrain the emissions, the inversion system diagnoses that their assimilation significantly reduces the uncertainty in monthly emissions: by 9 % in November 2010 to 50 % in October 2010. The inverted monthly mean emissions correlate well with independent monthly mean air temperature. Furthermore, the inverted annual mean emission is consistent with the independent revision of the AIRPARIF inventory for the year
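
    The stringent gradient-selection rule can be expressed as a simple filter. The wind-speed and wind-direction thresholds follow the abstract, while the definition of the "mid-afternoon" window and the argument names are illustrative assumptions.

      def keep_gradient(hour_local, wind_speed, wind_dir, transect_dir,
                        afternoon=(12, 17), min_speed=3.0, max_misalign=15.0):
          """Return True if an hourly CO2 gradient passes the selection described
          above: mid-afternoon only (window assumed here), wind speed > 3 m/s, and
          modelled upwind-site wind within +/-15 deg of the downwind-upwind transect."""
          in_afternoon = afternoon[0] <= hour_local <= afternoon[1]
          fast_enough = wind_speed > min_speed
          misalign = abs((wind_dir - transect_dir + 180.0) % 360.0 - 180.0)
          return in_afternoon and fast_enough and misalign <= max_misalign

      print(keep_gradient(14, 4.2, 40.0, 50.0))   # True: afternoon, fast, aligned
      print(keep_gradient(9, 5.0, 40.0, 50.0))    # False: outside the afternoon window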

  8. RESEARCH ON THE DIRECT CARBON EMISSION FORECAST OF CHINA'S PROVINCIAL RESIDENTS BASED ON NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2018-04-01

    Full Text Available Global climate change, which is mainly driven by human carbon emissions, will affect regional economies, the natural ecological environment, social development and food security in the near future. It is therefore particularly important to make accurate predictions of carbon emissions based on current carbon emissions. This paper compiled direct residential carbon emission data from 1995 to 2014 for 30 provinces (the data of Tibet, Hong Kong, Macao and Taiwan are missing) and for China as a whole. It then compared BP, RBF and Elman neural networks for direct carbon emission prediction, with the aim of selecting the optimal prediction method and exploring the possibility of residents' direct carbon emissions in China peaking by 2030. The research shows that: 1) residents' direct per-capita carbon emissions in all provinces showed an upward trend over the 20 years; 2) the prediction accuracy of the Elman neural network model is higher than that of the other models, making it more suitable for carbon emission projections; 3) if residents' direct carbon emissions continue to develop unconstrained, they will rise at a fast-then-slow pace over the next few years and begin to flatten after 2020, with per-capita direct carbon emissions peaking in 2032. This also confirms, in theory, that China can be expected to reach its carbon emission peak by 2030.

  9. Research on the Direct Carbon Emission Forecast of CHINA'S Provincial Residents Based on Neural Network

    Science.gov (United States)

    Zhang, T.; Zhou, B.; Zhou, S.; Yan, W.

    2018-04-01

    Global climate change, which is mainly driven by human carbon emissions, will affect regional economies, the natural ecological environment, social development and food security in the near future. It is therefore particularly important to make accurate predictions of carbon emissions based on current carbon emissions. This paper compiled direct residential carbon emission data from 1995 to 2014 for 30 provinces (the data of Tibet, Hong Kong, Macao and Taiwan are missing) and for China as a whole. It then compared BP, RBF and Elman neural networks for direct carbon emission prediction, with the aim of selecting the optimal prediction method and exploring the possibility of residents' direct carbon emissions in China peaking by 2030. The research shows that: 1) residents' direct per-capita carbon emissions in all provinces showed an upward trend over the 20 years; 2) the prediction accuracy of the Elman neural network model is higher than that of the other models, making it more suitable for carbon emission projections; 3) if residents' direct carbon emissions continue to develop unconstrained, they will rise at a fast-then-slow pace over the next few years and begin to flatten after 2020, with per-capita direct carbon emissions peaking in 2032. This also confirms, in theory, that China can be expected to reach its carbon emission peak by 2030.
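
    For illustration, the Elman architecture favoured by the study is a simple recurrent network whose hidden state is fed back as context at the next step. The sketch below shows only an untrained forward pass on a toy series; it is not the trained emission model from the paper.

      import numpy as np

      class ElmanCell:
          """Minimal Elman recurrent cell: the hidden state is fed back as context."""
          def __init__(self, n_in, n_hidden, n_out, seed=0):
              rng = np.random.default_rng(seed)
              self.Wxh = rng.normal(0, 0.1, (n_hidden, n_in))
              self.Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))
              self.Why = rng.normal(0, 0.1, (n_out, n_hidden))
              self.h = np.zeros(n_hidden)

          def step(self, x):
              self.h = np.tanh(self.Wxh @ x + self.Whh @ self.h)   # context feedback
              return self.Why @ self.h

      # One-step-ahead pass over a normalized per-capita emission series (toy values).
      series = np.array([0.21, 0.24, 0.26, 0.29, 0.31])
      cell = ElmanCell(n_in=1, n_hidden=8, n_out=1)
      preds = [float(cell.step(np.array([v]))[0]) for v in series[:-1]]
      print(preds)   # untrained outputs; training (e.g. backprop through time) is omitted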

  10. Impact of Carbon Quota Allocation Mechanism on Emissions Trading: An Agent-Based Simulation

    Directory of Open Access Journals (Sweden)

    Wei Jiang

    2016-08-01

    Full Text Available This paper establishes an agent-based simulation system for carbon emissions trading that reflects the complexity of the trading process. The system analyzes the impact of the carbon quota allocation mechanism on emissions trading in three respects: the amount of emissions reduction, the economic effect on the emitters, and the emissions reduction cost. Several simulations were run on carbon emission data for different industries in China. The results indicate that an emissions trading policy can effectively reduce carbon emissions in a perfectly competitive market. Moreover, comparing separate quota allocation mechanisms shows that a scheme with a modest quota decrease under a comprehensive allocation mechanism minimizes the unit carbon emission cost. Implementing this scheme also keeps the economic impact of limiting carbon emissions to a minimum, provided the environment is not harmed. However, an excessive quota decrease does not encourage emitters to reduce emissions. Given that several developing countries face the dual task of limiting carbon emissions and developing their economies, it is advisable to adopt a comprehensive carbon quota allocation mechanism and to increase the initial proportion of free allocation.
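
    A toy agent-based step of such a simulation might look as follows: each emitter receives a free allocation, then either abates or buys permits depending on whether its marginal abatement cost is below the permit price. All parameters and behavioural rules here are illustrative assumptions, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_agents = 5
      baseline = rng.uniform(80, 120, n_agents)   # unconstrained emissions (tCO2)
      mac = rng.uniform(5, 40, n_agents)          # marginal abatement cost ($/tCO2)
      quota_share = 0.9                           # free allocation: 90% of baseline
      permit_price = 20.0                         # exogenous market price ($/tCO2)

      allowances = quota_share * baseline
      shortfall = baseline - allowances           # tonnes each agent must cover
      # Decision rule: abate when abatement is cheaper than buying permits.
      abated = np.where(mac < permit_price, shortfall, 0.0)
      bought = shortfall - abated
      cost = abated * mac + bought * permit_price

      print("system emissions after trading:", round((baseline - abated).sum(), 1))
      print("tonnes abated:", round(abated.sum(), 1), "compliance cost:", round(cost.sum(), 1))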

  11. Physical characterization of biomass-based pyrolysis liquids. Application of standard fuel oil analyses

    Energy Technology Data Exchange (ETDEWEB)

    Oasmaa, A; Leppaemaeki, E; Koponen, P; Levander, J; Tapola, E [VTT Energy, Espoo (Finland). Energy Production Technologies

    1998-12-31

    The main purpose of the study was to test the applicability of standard fuel oil methods developed for petroleum-based fuels to pyrolysis liquids. In addition, research on sampling, homogeneity, stability, miscibility and corrosivity was carried out. The standard methods have been tested for several different pyrolysis liquids. Recommendations on sampling, sample size and small modifications of standard methods are presented. In general, most of the methods can be used as such, but the accuracy of the analysis can be improved by minor modifications. Fuel oil analyses not suitable for pyrolysis liquids have been identified. Homogeneity of the liquids is the most critical factor in accurate analysis. The presence of air bubbles may interfere with several analyses. Sample preheating and prefiltration should be avoided when possible. The former may cause changes in the composition and structure of the pyrolysis liquid. The latter may remove part of the organic material along with the particles. The size of the sample should be determined on the basis of the homogeneity and the water content of the liquid. The basic analyses of the Technical Research Centre of Finland (VTT) include water, pH, solids, ash, Conradson carbon residue, heating value, CHN, density, viscosity, pour point, flash point, and stability. Additional analyses are carried out when needed. (orig.) 53 refs.

  12. 47 CFR 90.691 - Emission mask requirements for EA-based systems.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Emission mask requirements for EA-based systems. 90.691 Section 90.691 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND... of EA-Based SMR Systems in the 809-824/851-869 MHz Band § 90.691 Emission mask requirements for EA...

  13. Producing remote sensing-based emission estimates of prescribed burning in the contiguous United States for the U.S. Environmental Protection Agency 2011 National Emissions Inventory

    Science.gov (United States)

    McCarty, J. L.; Pouliot, G. A.; Soja, A. J.; Miller, M. E.; Rao, T.

    2013-12-01

    Prescribed fires in agricultural landscapes generally produce smaller burned areas than wildland fires but are important contributors to emissions impacting air quality and human health. Currently, there are a variety of available satellite-based estimates of crop residue burning, including the NOAA/NESDIS Hazard Mapping System (HMS), the Satellite Mapping Automated Reanalysis Tool for Fire Incident Reconciliation (SMARTFIRE 2), the Moderate Resolution Imaging Spectroradiometer (MODIS) Official Burned Area Product (MCD45A1), the MODIS Direct Broadcast Burned Area Product (MCD64A1), the MODIS Active Fire Product (MCD14ML), and a regionally-tuned 8-day cropland differenced Normalized Burn Ratio product for the contiguous U.S. The purpose of this NASA-funded research was to refine the regionally-tuned product utilizing higher spatial resolution crop type data from the USDA NASS Cropland Data Layer and burned area training data from field work and high resolution commercial satellite data to improve the U.S. Environmental Protection Agency's (EPA) National Emissions Inventory (NEI). The final product delivered to the EPA included a detailed database of 25 different atmospheric emissions at the county level, emission distributions by crop type and seasonality, and GIS data. The resulting emission databases were shared with the U.S. EPA and regional offices, the National Wildfire Coordinating Group (NWCG) Smoke Committee, and all 48 states in the contiguous U.S., with detailed error estimations for Wyoming and Indiana and detailed analyses of results for Florida, Minnesota, North Dakota, Oklahoma, and Oregon. This work also provided opportunities to discover the different needs of federal and state partners, including the various geospatial abilities and platforms across the many users and how to incorporate expert air quality, policy, and land management knowledge into quantitative earth observation-based estimations of prescribed fire emissions. Finally, this work
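
    Bottom-up fire emission inventories of this kind typically multiply burned area, fuel load, combustion completeness and a pollutant emission factor; the sketch below illustrates that arithmetic with placeholder values and is not the exact algorithm behind the delivered NEI product.

      def burning_emissions(burned_area_ha, fuel_load_t_ha, combustion_completeness,
                            emission_factor_g_kg):
          """Emissions (tonnes) = area x fuel load x completeness x emission factor."""
          dry_matter_t = burned_area_ha * fuel_load_t_ha * combustion_completeness
          return dry_matter_t * 1000.0 * emission_factor_g_kg / 1.0e6  # g -> tonnes

      # Illustrative county-level example; the numbers are placeholders, not NEI values.
      pm25_t = burning_emissions(burned_area_ha=1200.0, fuel_load_t_ha=4.5,
                                 combustion_completeness=0.85,
                                 emission_factor_g_kg=5.8)
      print(round(pm25_t, 1), "t of PM2.5")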

  14. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    Full Text Available This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low pressure stage centrifugal compressor in a MW level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned. The outlet diameter of the vaneless diffuser has been reduced, and the original single stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  15. Discharges of copper, zinc and lead to water and soil. Analysis of the emission pathways and possible emission reduction measures; Eintraege von Kuper, Zink und Blei in Gewaesser und Boeden. Analyse der Emissionspfade und moeglicher Emissionsminderungsmassnahmen

    Energy Technology Data Exchange (ETDEWEB)

    Hillenbrand, Thomas; Toussaint, Dominik; Boehm, Eberhard [Fraunhofer-Institut fuer Systemtechnik und Innovationsforschung (ISI), Karlsruhe (Germany); Fuchs, Stephan; Scherer, Ulrike [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Siedlungswasserwirtschaft; Rudolphi, Alexander; Hoffmann, Martin [Gesellschaft fuer Oekologische Bautechnik Berlin mbH (GFOeB) (Germany)

    2005-08-15

    Because of the pollution situation for copper, zinc and lead and due to the significance of non-point sources, there is a basic need for action to reduce the environmental burden due to non-point emissions of these heavy metals. Therefore, the aim of the project was first to quantify the application-related discharges of these heavy metals into water and soil. Based on this, specific strategies to reduce the emissions to water were developed. Additionally, a guideline for architects and builders for the outdoor use of the substances in the building sector was drawn up with the objective of supplying information and aids on the environmentally-compatible use of these substances. Furthermore, existing life cycle assessment methods were examined for the use of various roofing materials, as well as the possibilities to further develop these methods. The results of the emission calculations show the great significance of the application areas vehicles, building sector, water supply and other specific sources (e.g. galvanized products). The examination of different measures to reduce the emissions gives a review and an assessment of the possibilities, taking into account the relevant boundary conditions. This information can also serve as the basis for elaborating a programme of measures within the scope of future river basin management. (orig.)

  16. FY01 Supplemental Science and Performance Analysis: Volume 1, Scientific Bases and Analyses

    International Nuclear Information System (INIS)

    Bodvarsson, G.S.; Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  17. FY01 Supplemental Science and Performance Analysis: Volume 1,Scientific Bases and Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  18. Estimates of future climate based on SRES emission scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Godal, Odd; Sygna, Linda; Fuglestvedt, Jan S.; Berntsen, Terje

    2000-02-14

    The preliminary emission scenarios in the Special Report on Emissions Scenarios (SRES), developed by the Intergovernmental Panel on Climate Change (IPCC), will eventually replace the old IS92 scenarios. By running these scenarios in a simple climate model (SCM) we estimate a future temperature increase of between 1.7 {sup o}C and 2.8 {sup o}C from 1990 to 2100. The global sea level rise over the same period is between 0.33 m and 0.45 m. Compared to the previous IPCC scenarios (IS92), the SRES scenarios generally result in changes in both development over time and level of emissions, concentrations, radiative forcing, and finally temperature change and sea level rise. The most striking difference between the IS92 scenarios and the SRES scenarios is the lower level of SO{sub 2} emissions. The range in CO{sub 2} emissions is also expected to be narrower in the new scenarios. The SRES scenarios result in a narrower range both for temperature change and sea level rise from 1990 to 2100 compared to the range estimated for the IS92 scenarios. (author)
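
    Simple climate models of the kind used for such estimates can be reduced, for illustration, to a one-box energy balance in which temperature responds to radiative forcing with a climate feedback parameter and an effective heat capacity. The parameter values and the linear forcing path below are illustrative assumptions, not those of the SCM used in the report.

      import numpy as np

      def energy_balance(forcing_w_m2, lam=1.1, heat_capacity=8.0, dt_years=1.0):
          """One-box energy balance model: C dT/dt = F(t) - lam * T.
          lam in W m-2 K-1, heat_capacity in W yr m-2 K-1 (illustrative values)."""
          T, out = 0.0, []
          for F in forcing_w_m2:
              T += dt_years * (F - lam * T) / heat_capacity
              out.append(T)
          return np.array(out)

      # A stand-in forcing path rising to ~4 W/m2 between 1990 and 2100.
      years = np.arange(1990, 2101)
      warming = energy_balance(np.linspace(0.0, 4.0, years.size))
      print(round(float(warming[-1]), 2), "K warming in 2100 relative to 1990")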

  19. Issues and approaches in risk-based aging analyses of passive components

    International Nuclear Information System (INIS)

    Uryasev, S.P.; Samanta, P.K.; Vesely, W.E.

    1994-01-01

    In previous NRC-sponsored work, a general methodology was developed to quantify the risk contributions from aging components at nuclear plants. The methodology allowed Probabilistic Risk Analyses (PRAs) to be modified to incorporate age-dependent component failure rates and aging maintenance models, and to evaluate and prioritize the aging contributions from active components using the linear aging failure rate model and empirical component aging rates. In the present paper, this methodology is extended to passive components (for example, the pipes, heat exchangers, and the vessel). The analyses of passive components bring in issues different from those of active components. Here, we specifically focus on three aspects that need to be addressed in risk-based aging prioritization of passive components
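
    The linear aging failure rate model referred to above takes the form lambda(t) = lambda0 + a*t. A minimal sketch of how such a rate feeds an age-dependent unavailability estimate is given below; the parameter values and the first-order test-interval approximation are illustrative, not taken from the study.

      def aging_failure_rate(age_years, lam0=1.0e-5, aging_rate=5.0e-7):
          """Linear aging failure rate model: lambda(t) = lambda0 + a * t (per hour)."""
          return lam0 + aging_rate * age_years

      def mean_unavailability(age_years, test_interval_h=720.0):
          """First-order approximation for a periodically tested component:
          q ~ lambda(t) * T_test / 2 (illustrative parameter values)."""
          return aging_failure_rate(age_years) * test_interval_h / 2.0

      for age in (0, 10, 20, 30):
          print(age, "years:", f"{mean_unavailability(age):.2e}")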

  20. DESIGNING EAP MATERIALS BASED ON INTERCULTURAL CORPUS ANALYSES: THE CASE OF LOGICAL MARKERS IN RESEARCH ARTICLES

    Directory of Open Access Journals (Sweden)

    Pilar Mur Dueñas

    2009-10-01

    Full Text Available The ultimate aim of intercultural analyses in English for Academic Purposes is to help non-native scholars function successfully in the international disciplinary community in English. The aim of this paper is to show how corpus-based intercultural analyses can be useful to design EAP materials on a particular metadiscourse category, logical markers, in research article writing. The paper first describes the analysis carried out of additive, contrastive and consecutive logical markers in a corpus of research articles in English and in Spanish in a particular discipline, Business Management. Differences were found in their frequency and also in the use of each of the sub-categories. Then, five activities designed on the basis of these results are presented. They are aimed at raising Spanish Business scholars' awareness of the specific uses and pragmatic function of frequent logical markers in international research articles in English.

  1. Integrated approach for fusion multi-physics coupled analyses based on hybrid CAD and mesh geometries

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, Yuefeng, E-mail: yuefeng.qiu@kit.edu; Lu, Lei; Fischer, Ulrich

    2015-10-15

    Highlights: • Integrated approach for neutronics, thermal and structural analyses was developed. • MCNP5/6, TRIPOLI-4 were coupled with CFX, Fluent and ANSYS Workbench. • A novel meshing approach has been proposed for describing MC geometry. - Abstract: Coupled multi-physics analyses on fusion reactor devices require high-fidelity neutronic models, and flexible, accurate data exchanging between various calculation codes. An integrated coupling approach has been developed to enable the conversion of CAD, mesh, or hybrid geometries for Monte Carlo (MC) codes MCNP5/6, TRIPOLI-4, and translation of nuclear heating data for CFD codes Fluent, CFX and structural mechanical software ANSYS Workbench. The coupling approach has been implemented based on SALOME platform with CAD modeling, mesh generation and data visualization capabilities. A novel meshing approach has been developed for generating suitable meshes for MC geometry descriptions. The coupling approach has been concluded to be reliable and efficient after verification calculations of several application cases.

  2. Tracing common origins of Genomic Islands in prokaryotes based on genome signature analyses.

    Science.gov (United States)

    van Passel, Mark Wj

    2011-09-01

    Horizontal gene transfer constitutes a powerful and innovative force in evolution, but often little is known about the actual origins of transferred genes. Sequence alignments are generally of limited use in tracking the original donor, since only a small fraction of the total genetic diversity is thought to have been uncovered so far. Alternatively, approaches based on similarities in the genome-specific relative oligonucleotide frequencies do not require alignments. Even though the exact origins of horizontally transferred genes may still not be established using these compositional analyses, they do suggest that compositionally very similar regions are likely to have had a common origin. These analyses have shown that up to a third of large acquired gene clusters that reside in the same genome are compositionally very similar, indicative of a shared origin. This brings us closer to uncovering the original donors of horizontally transferred genes, and could help in elucidating possible regulatory interactions between previously unlinked sequences.
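
    A minimal sketch of such a genome-signature comparison is given below: tetranucleotide frequencies of two sequences are counted and their dissimilarity measured. Real analyses operate on whole genomic islands and use more careful statistics; the sequences and the distance measure here are purely illustrative.

      from itertools import product

      def tetra_freqs(seq):
          """Relative tetranucleotide frequencies of a DNA sequence."""
          counts = dict.fromkeys(("".join(p) for p in product("ACGT", repeat=4)), 0)
          seq, total = seq.upper(), 0
          for i in range(len(seq) - 3):
              kmer = seq[i:i + 4]
              if kmer in counts:
                  counts[kmer] += 1
                  total += 1
          return {k: c / max(total, 1) for k, c in counts.items()}

      def signature_distance(seq_a, seq_b):
          """Mean absolute difference between two tetranucleotide signatures."""
          fa, fb = tetra_freqs(seq_a), tetra_freqs(seq_b)
          return sum(abs(fa[k] - fb[k]) for k in fa) / len(fa)

      # Toy sequences standing in for two acquired gene clusters.
      island1 = "ATGCGCGTATATGCGC" * 50
      island2 = "ATGCGCGTATATGCGC" * 45 + "GGGGCCCCAATT" * 10
      print(signature_distance(island1, island2))   # small value -> similar signature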

  3. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories
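
    At its core the fuel-based approach multiplies a fleet-average, fuel-normalized emission factor by fuel sales. The sketch below shows that arithmetic with placeholder values; it is not the Los Angeles calculation from the thesis.

      def fuel_based_emissions(fuel_sold_liters, fuel_density_kg_l,
                               emission_factor_g_per_kg_fuel):
          """Fleet emissions (tonnes) = fuel sales (kg) x on-road emission factor."""
          fuel_kg = fuel_sold_liters * fuel_density_kg_l
          return fuel_kg * emission_factor_g_per_kg_fuel / 1.0e6

      # Illustrative monthly air-basin estimate (placeholder numbers).
      co_tonnes = fuel_based_emissions(fuel_sold_liters=1.2e9,
                                       fuel_density_kg_l=0.74,
                                       emission_factor_g_per_kg_fuel=60.0)
      print(round(co_tonnes), "t CO per month")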

  4. Industrial CO2 emissions in China based on the hypothetical extraction method: Linkage analysis

    International Nuclear Information System (INIS)

    Wang, Yuan; Wang, Wenqin; Mao, Guozhu; Cai, Hua; Zuo, Jian; Wang, Lili; Zhao, Peng

    2013-01-01

    Fossil fuel-related CO 2 emissions are regarded as the primary source of global climate change. Unlike direct CO 2 emissions for each sector, CO 2 emissions associated with complex linkages among sectors are usually ignored. We integrated input–output analysis with the hypothetical extraction method to uncover the in-depth characteristics of the inter-sectoral linkages of CO 2 emissions. Based on China's 2007 data, this paper compared the output and demand emissions of CO 2 among eight blocks. The difference between the demand and output emissions of a block indicates that CO 2 is transferred from one block to another. Among the sectors analyzed in this study, the Energy industry block has the greatest CO 2 emissions, with the Technology industry, Construction and Service blocks as the primary destinations of its emissions. Low-carbon industries that have lower direct CO 2 emissions are deeply anchored to high-carbon ones. If no effective measures are taken to limit final demand emissions or adjust the energy structure, shifting to an economy oriented toward low-carbon industries would entail a decrease in CO 2 emission intensity per unit GDP but an increase in overall CO 2 emissions in absolute terms. The results are discussed in the context of climate-change policy. - Highlights: • Quantitatively analyze the characteristics of inter-industrial CO 2 emission linkages. • Propose a linkage measuring method for CO 2 emissions based on the modified HEM. • Detect that the energy industry is a key sector for the output of embodied carbon. • Conclude that low-carbon industries are deeply anchored to high-carbon industries
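
    One common formulation of the hypothetical extraction idea can be sketched numerically: total emissions are computed from the Leontief inverse with and without the linkages of a chosen block, and the difference is attributed to that block. The three-sector coefficients and emission intensities below are toy values, not the 2007 Chinese data.

      import numpy as np

      A = np.array([[0.10, 0.20, 0.05],      # technical coefficient matrix (toy)
                    [0.15, 0.10, 0.10],
                    [0.05, 0.25, 0.15]])
      f = np.array([100.0, 80.0, 120.0])     # final demand
      e = np.array([2.0, 0.5, 0.3])          # direct CO2 per unit output (t/unit)

      def total_emissions(A, f, e):
          x = np.linalg.solve(np.eye(len(f)) - A, f)   # Leontief: x = (I - A)^-1 f
          return float(e @ x)

      # Hypothetically extract sector 0 (e.g. an energy-like block): remove its linkages.
      A_ext = A.copy(); A_ext[0, :] = 0.0; A_ext[:, 0] = 0.0
      f_ext = f.copy(); f_ext[0] = 0.0
      linkage = total_emissions(A, f, e) - total_emissions(A_ext, f_ext, e)
      print("emissions attributable to the extracted block's linkages:", round(linkage, 1))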

  5. Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses.

    Science.gov (United States)

    Syrowatka, Ania; Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn

    2016-01-26

    Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than
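
    The pooling step can be illustrated with a fixed-effect inverse-variance combination of standardized mean differences; the review's own analyses may rely on random-effects models, and the SMDs, variances and subgroup labels below are invented for illustration only.

      import numpy as np

      def pooled_smd(smds, variances):
          """Fixed-effect inverse-variance pooling of standardized mean differences."""
          w = 1.0 / np.asarray(variances)
          est = np.sum(w * np.asarray(smds)) / np.sum(w)
          se = np.sqrt(1.0 / np.sum(w))
          return est, (est - 1.96 * se, est + 1.96 * se)

      # Toy subgroup comparison: trials of aids with vs without a 'feedback' feature.
      est, ci = pooled_smd([0.45, 0.30, 0.52], [0.02, 0.03, 0.04])
      print(f"with 'feedback':    SMD={est:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
      est, ci = pooled_smd([0.20, 0.15], [0.02, 0.05])
      print(f"without 'feedback': SMD={est:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")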

  6. Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses

    Science.gov (United States)

    Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn

    2016-01-01

    Background Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. Objective The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Methods Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Results Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. As expected, computer-based decision aids performed better than usual care or alternative aids; however

  7. Heat supply to low energy dwellings in district heating areas. Analyses of CO{sub 2} emissions and electricity supply security; Varmeforsyning til lavenergiboliger i omraader med fjernvarmekonsesjon. Analyser av CO{sub 2}-utslipp og forsyningssikkerhet for elektrisitet

    Energy Technology Data Exchange (ETDEWEB)

    Thyholt, Marit

    2006-07-01

    Building low energy dwellings in large development projects is a new situation in Norway. The municipalities have to a little extent analyzed the consequences of this new housing standard with respect to the energy supply to such areas, and how this standard may change the plans for new or extended district heat production. In the provision about the mandatory connection to district heating plants, and the appendant provision related to a heating system that can utilize district heat, the district heat supply and the heat demand are not seen in connection. The objective of this dissertation is to provide the municipalities with a basis for decision making in the processing of applications concerning dispensation from the mandatory connection or the heating system requirement for dwellings with low heat demand. This basis for decision making is based on the national aim of reducing carbon dioxide (CO{sub 2}) emissions and of improving the electricity supply security. This summary provides an abstract from the discussion of the legislation as an incentive or barrier for building low energy dwellings. An abstract from a survey among construction firms concerning the motivation for building low energy dwellings is also included. In addition, the summary provides a comprehensive abstract of the results from the CO{sub 2} emission calculations, and the basis for these calculations. Introductorily a brief background of the national focus on energy savings and increased use of hydronic heating, including district heat, is given.

  8. A web-based endpoint adjudication system for interim analyses in clinical trials.

    Science.gov (United States)

    Nolen, Tracy L; Dimmick, Bill F; Ostrosky-Zeichner, Luis; Kendrick, Amy S; Sable, Carole; Ngai, Angela; Wallace, Dennis

    2009-02-01

    A data monitoring committee (DMC) is often employed to assess trial progress and review safety data and efficacy endpoints throughout a trial. Interim analyses performed for the DMC should use data that are as complete and verified as possible. Such analyses are complicated when data verification involves subjective study endpoints or requires clinical expertise to determine each subject's status with respect to the study endpoint. Therefore, procedures are needed to obtain adjudicated data for interim analyses in an efficient manner. In the past, methods for handling such data included using locally reported results as surrogate endpoints, adjusting analysis methods for unadjudicated data, or simply performing the adjudication as rapidly as possible. These methods all have inadequacies that make their sole usage suboptimal. For a study of prophylaxis for invasive candidiasis, adjudication of both study eligibility criteria and clinical endpoints prior to two interim analyses was required. Because the study was expected to enroll at a moderate rate and the sponsor required adjudicated endpoints to be used for interim analyses, an efficient process for adjudication was required. We created a web-based endpoint adjudication system (WebEAS) that allows for expedited review by the endpoint adjudication committee (EAC). This system automatically identifies when a subject's data are complete, creates a subject profile from the study data, and assigns EAC reviewers. The reviewers use the WebEAS to review the subject profile and submit their completed review form. The WebEAS then compares the reviews, assigns an additional review as a tiebreaker if needed, and stores the adjudicated data. The study for which this system was originally built was administratively closed after 10 months with only 38 subjects enrolled. The adjudication process was finalized and the WebEAS system activated prior to study closure. Some website accessibility issues presented initially. However

  9. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center produces and distributes a data base of CO 2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion data base, expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach. But emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO 2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved
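
    The disaggregation itself amounts to allocating a national total to grid cells in proportion to each cell's share of the national population, as in the sketch below (toy grid and an illustrative national total, not CDIAC data).

      import numpy as np

      def disaggregate_by_population(national_total, population_grid):
          """Allocate a national emission total to grid cells proportionally to
          population (a simplified proxy disaggregation)."""
          pop = np.asarray(population_grid, dtype=float)
          return national_total * pop / pop.sum()

      # Toy one-degree grid covering a country with four cells.
      population = [[2.0e6, 0.5e6],
                    [8.0e6, 1.5e6]]
      grid_emissions = disaggregate_by_population(450.0, population)  # Mt CO2
      print(grid_emissions)                 # most emissions land in the populous cell
      print(round(grid_emissions.sum(), 1)) # 450.0, mass is conserved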

  10. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal

  11. CrusView: a Java-based visualization platform for comparative genomics analyses in Brassicaceae species.

    Science.gov (United States)

    Chen, Hao; Wang, Xiangfeng

    2013-09-01

    In plants and animals, chromosomal breakage and fusion events based on conserved syntenic genomic blocks lead to conserved patterns of karyotype evolution among species of the same family. However, karyotype information has not been well utilized in genomic comparison studies. We present CrusView, a Java-based bioinformatic application utilizing Standard Widget Toolkit/Swing graphics libraries and a SQLite database for performing visualized analyses of comparative genomics data in Brassicaceae (crucifer) plants. Compared with similar software and databases, one of the unique features of CrusView is its integration of karyotype information when comparing two genomes. This feature allows users to perform karyotype-based genome assembly and karyotype-assisted genome synteny analyses with preset karyotype patterns of the Brassicaceae genomes. Additionally, CrusView is a local program, which gives its users high flexibility when analyzing unpublished genomes and allows users to upload self-defined genomic information so that they can visually study the associations between genome structural variations and genetic elements, including chromosomal rearrangements, genomic macrosynteny, gene families, high-frequency recombination sites, and tandem and segmental duplications between related species. This tool will greatly facilitate karyotype, chromosome, and genome evolution studies using visualized comparative genomics approaches in Brassicaceae species. CrusView is freely available at http://www.cmbb.arizona.edu/CrusView/.

  12. A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions

    International Nuclear Information System (INIS)

    Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng

    2014-01-01

    Traditional traffic emission inventory models aim to provide overall emissions at regional level which cannot meet planners' demand for detailed and accurate traffic emissions information at the road segment level. Therefore, a road segment-based emission model for estimating light duty vehicle emissions is proposed, where floating car technology is used to collect information of traffic condition of roads. The employed analysis framework consists of three major modules: the Average Speed and the Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM) and the Traffic Emission Module (TEM). The ASAAM is used to obtain the average speed and the average acceleration of the fleet on each road segment using FCD. The TFEM is designed to estimate the traffic flow of each road segment in a given period, based on the speed-flow relationship and traffic flow spatial distribution. Finally, the TEM estimates emissions from each road segment, based on the results of previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this analysis framework. The temporal-spatial distribution patterns of the pollutant emissions of road segments are also summarized. The results show high emission road segments cluster in several important regions in Shenzhen. Also, road segments emit more emissions during rush hours than other periods. The presented case study demonstrates that the proposed approach is feasible and easy-to-use to help planners make informed decisions by providing detailed road segment-based emission information
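
    A skeletal version of the three-module pipeline is sketched below. The speed-flow relation, the emission-factor curve and all parameter values are placeholders; only the overall structure (ASAAM, then TFEM, then TEM) follows the description above.

      import numpy as np

      def asaam(fcd_speeds_kmh):
          """Average Speed and Average Acceleration Module: fleet averages from
          floating car data on one segment (acceleration via finite differences)."""
          v = np.asarray(fcd_speeds_kmh) / 3.6           # m/s, per-second samples assumed
          return float(v.mean() * 3.6), float(np.diff(v).mean())

      def tfem(avg_speed_kmh, free_flow_kmh=60.0, capacity_veh_h=1800.0):
          """Traffic Flow Estimation Module: toy speed-flow relationship."""
          return capacity_veh_h * min(avg_speed_kmh / free_flow_kmh, 1.0)

      def tem(flow_veh_h, avg_speed_kmh, length_km=1.0):
          """Traffic Emission Module: hypothetical emission factor falling with speed."""
          ef_g_veh_km = 120.0 * (avg_speed_kmh / 60.0) ** -0.5
          return flow_veh_h * length_km * ef_g_veh_km / 1000.0   # kg per hour

      speeds = [32, 35, 30, 28, 33, 36]                  # FCD samples on one segment
      v_avg, a_avg = asaam(speeds)
      print(round(tem(tfem(v_avg), v_avg), 1), "kg/h on this segment")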

  13. Quantifying the climate impact of emissions from land-based transport in Germany

    OpenAIRE

    Hendricks, J.; Righi, M.; Dahlmann, K.; Gottschaldt, K.-D.; Grewe, V.; Ponater, M.; Sausen, R.; Heinrichs, D.; Winkler, C.; Wolfermann, A.; Kampffmeyer, T.; Friedrich, R; Klötzke, M.; Kugler, U.

    2017-01-01

    Although climate change is a global problem, specific mitigation measures are frequently applied on regional or national scales only. This is the case in particular for measures to reduce the emissions of land-based transport, which is largely characterized by regional or national systems with independent infrastructure, organization, and regulation. The climate perturbations caused by regional transport emissions are small compared to those resulting from global emissions. Consequently, they...

  14. A Game-based Corpus for Analysing the Interplay between Game Context and Player Experience

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Asteriadis, Stylianos

    2011-01-01

    Recognizing players' affective state while playing video games has been the focus of many recent research studies. In this paper we describe the process that has been followed to build a corpus based on game events and recorded video sessions from human players while playing Super Mario Bros. We present different types of information that have been extracted from game context, player preferences and perception of the game, as well as user features, automatically extracted from video recordings. We run a number of initial experiments to analyse players' behavior while playing video games as a case...

  15. How distributed processing produces false negatives in voxel-based lesion-deficit analyses.

    Science.gov (United States)

    Gajardo-Vidal, Andrea; Lorca-Puls, Diego L; Crinion, Jennifer T; White, Jitrachote; Seghier, Mohamed L; Leff, Alex P; Hope, Thomas M H; Ludersdorfer, Philipp; Green, David W; Bowman, Howard; Price, Cathy J

    2018-07-01

    In this study, we hypothesized that if the same deficit can be caused by damage to one or another part of a distributed neural system, then voxel-based analyses might miss critical lesion sites because preservation of each site will not be consistently associated with preserved function. The first part of our investigation used voxel-based multiple regression analyses of data from 359 right-handed stroke survivors to identify brain regions where lesion load is associated with picture naming abilities after factoring out variance related to object recognition, semantics and speech articulation so as to focus on deficits arising at the word retrieval level. A highly significant lesion-deficit relationship was identified in left temporal and frontal/premotor regions. Post-hoc analyses showed that damage to either of these sites caused the deficit of interest in less than half the affected patients (76/162 = 47%). After excluding all patients with damage to one or both of the identified regions, our second analysis revealed a new region, in the anterior part of the left putamen, which had not been previously detected because many patients had the deficit of interest after temporal or frontal damage that preserved the left putamen. The results illustrate how (i) false negative results arise when the same deficit can be caused by different lesion sites; (ii) some of the missed effects can be unveiled by adopting an iterative approach that systematically excludes patients with lesions to the areas identified in previous analyses, (iii) statistically significant voxel-based lesion-deficit mappings can be driven by a subset of patients; (iv) focal lesions to the identified regions are needed to determine whether the deficit of interest is the consequence of focal damage or much more extensive damage that includes the identified region; and, finally, (v) univariate voxel-based lesion-deficit mappings cannot, in isolation, be used to predict outcome in other patients

  16. 40 CFR 52.2036 - 1990 base year emission inventory.

    Science.gov (United States)

    2010-07-01

    ... Oxygen Furnace Shop, Blast Furnace Casthouse), submitted June 10, 1996, are approved. Sharon Steel... cars, flare stack, tuyeres) are 0.4 TPY and 49.3 TPY, respectively. The 1990 VOC and NOX emissions from the Basic Oxygen Furnace Shop (scrap preheating, ladle preheating and heaters) are 1.4 TPY and 39.6...

  17. TRANSIT BUS LOAD-BASED MODAL EMISSION RATE MODEL DEVELOPMENT

    Science.gov (United States)

    Heavy-duty diesel vehicle (HDDV) operations are a major source of oxides of nitrogen (NOx) and particulate matter (PM) emissions in metropolitan areas nationwide. Although HDDVs constitute a small portion of the on-road fleet, they typically contribute more than 45% of NOx and ...

  18. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
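
    For readers unfamiliar with the underlying model, the quantities estimated here (dust temperature, spectral index, and optical depth at 353 GHz) are the parameters of the standard single-component modified blackbody used to describe thermal dust emission. The sketch below fits that parametric form to a synthetic multi-frequency pixel; the frequencies, noise level and least-squares fitting routine are illustrative assumptions and do not reproduce the sparsity-based premise algorithm itself.

```python
# Minimal sketch: single-component modified-blackbody (MBB) dust model, the
# parametric form whose per-pixel parameters such methods estimate.
# Frequencies, noise level and the fitting routine are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

H = 6.626e-34    # Planck constant [J s]
K = 1.381e-23    # Boltzmann constant [J/K]
C = 2.998e8      # speed of light [m/s]
NU0 = 353e9      # reference frequency, 353 GHz
JY = 1e-26       # W m^-2 Hz^-1 per jansky

def planck(nu, T):
    """Planck function B_nu(T) in SI units [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def mbb_mjy(nu, tau353, beta, T):
    """Modified blackbody I_nu = tau_353 * (nu/nu0)^beta * B_nu(T), in MJy/sr."""
    return tau353 * (nu / NU0)**beta * planck(nu, T) / (1e6 * JY)

# Illustrative HFI-like frequencies (Hz) and one synthetic noisy pixel.
nu = np.array([353e9, 545e9, 857e9, 3000e9])
true = (1.2e-6, 1.6, 19.0)                          # tau_353, beta, T_dust [K]
i_obs = mbb_mjy(nu, *true) * (1 + 0.02 * np.random.randn(nu.size))

popt, _ = curve_fit(mbb_mjy, nu, i_obs, p0=(1e-6, 1.5, 20.0))
print("tau_353, beta, T_dust =", popt)
```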

  19. Secondary electron emission yield on poled silica based thick films

    DEFF Research Database (Denmark)

    Braga, D.; Poumellec, B.; Cannas, V.

    2004-01-01

    Studies on the distribution of the electric field produced by a thermal poling process in a layer of Ge-doped silica on a silicon substrate, using secondary electron emission yield (SEEY) measurements, are presented. Comparing the yield between poled and unpoled areas, the SEEY at the origin of electr...

  20. Noise emission corrections at intersections based on microscopic traffic simulation

    NARCIS (Netherlands)

    Coensel, B.de; Vanhove, F.; Logghe, S.; Wilmink, I.; Botteldooren, D.

    2006-01-01

    One of the goals of the European IMAGINE project, is to formulate strategies to improve traffic modelling for application in noise mapping. It is well known that the specific deceleration and acceleration dynamics of traffic at junctions can influence local noise emission. However, macroscopic

  1. Emission Modeling of an Interturbine Burner Based on Flameless Combustion

    NARCIS (Netherlands)

    Perpignan, A.A.V.; Talboom, M.G.; Levy, Yeshayahou; Gangoli Rao, A.

    2018-01-01

    Since its discovery, the flameless combustion (FC) regime has been a promising alternative to reduce pollutant emissions of gas turbine engines. This combustion mode is characterized by well-distributed reaction zones, which potentially decreases temperature gradients, acoustic oscillations, and

  2. Programs in Fortran language for reporting the results of the analyses by ICP emission spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Roca, M

    1985-07-01

    Three programs, written in FORTRAN IV language, for reporting the results of the analyses by ICP emission spectroscopy from data stored in files on floppy disks have been developed. They are intended, respectively, for the analyses of: 1) waters, 2) granites and slates, and 3) different kinds of geological materials. (Author) 8 refs.

  3. [Estimation of VOC emission from forests in China based on the volume of tree species].

    Science.gov (United States)

    Zhang, Gang-feng; Xie, Shao-dong

    2009-10-15

    Applying the volume data of dominant trees from statistics on the national forest resources, volatile organic compound (VOC) emissions of each main tree species in China were estimated based on the light-temperature model put forward by Guenther. China's VOC emission inventory for forests was established, and the space-time and age-class distributions of VOC emissions were analyzed. The results show that the total VOC emissions from forests in China are 8565.76 Gg, of which isoprene accounts for 5689.38 Gg (66.42%), monoterpenes for 1343.95 Gg (15.69%), and other VOCs for 1532.43 Gg (17.89%). VOC emissions vary significantly among species. Quercus is the main emitting genus, contributing 45.22% of the total, followed by Picea and Pinus massoniana with 6.34% and 5.22%, respectively. Southwest and Northeast China are the major emission regions. Specifically, Yunnan, Sichuan, Heilongjiang, Jilin and Shaanxi are the top five provinces producing the most VOC emissions from forests, contributing 15.09%, 12.58%, 10.35%, 7.49% and 7.37% of the total, respectively; together these five provinces account for more than half (52.88%) of the national emissions. VOC emissions also show remarkable seasonal variation, with summer emissions the largest, accounting for 56.66% of the annual total. Forests of different ages make different contributions: half-mature forests play a key role and contribute 38.84% of the total emissions from forests.
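
    The light-temperature model referred to above is the widely used Guenther et al. (1993) formulation, in which a basal emission rate is modulated by light (C_L) and temperature (C_T) activity factors. The sketch below implements that activity factor for isoprene; the basal rate and foliar density values are illustrative assumptions, and the scaling from stem-volume statistics to stand-level emissions used in the paper is not shown.

```python
# Minimal sketch of the Guenther et al. (1993) light-temperature activity
# factor used in isoprene emission inventories of this kind. Scaling to
# stand-level totals via biomass or stem-volume data is omitted, and the
# example inputs at the bottom are assumed values.
import math

R = 8.314                       # J mol^-1 K^-1
ALPHA, CL1 = 0.0027, 1.066      # light-response constants
CT1, CT2 = 95000.0, 230000.0    # J/mol
TS, TM = 303.0, 314.0           # K

def isoprene_activity(par, temp_k):
    """gamma = C_L * C_T (roughly 1 at 303 K and PAR = 1000 umol m^-2 s^-1)."""
    c_l = ALPHA * CL1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)
    c_t = (math.exp(CT1 * (temp_k - TS) / (R * TS * temp_k))
           / (1.0 + math.exp(CT2 * (temp_k - TM) / (R * TS * temp_k))))
    return c_l * c_t

# Example: flux = basal rate (ug g^-1 h^-1) x activity x foliar density (g m^-2)
basal_rate_ug_g_h, foliar_density_g_m2 = 24.0, 300.0   # illustrative values
gamma = isoprene_activity(par=1000.0, temp_k=303.0)
print(round(gamma, 3),
      round(basal_rate_ug_g_h * gamma * foliar_density_g_m2, 1), "ug m^-2 h^-1")
```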

  4. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  5. Improving the safety of a body composition analyser based on the PGNAA method

    Energy Technology Data Exchange (ETDEWEB)

    Miri-Hakimabad, Hashem; Izadi-Najafabadi, Reza; Vejdani-Noghreiyan, Alireza; Panjeh, Hamed [FUM Radiation Detection And Measurement Laboratory, Ferdowsi University of Mashhad (Iran, Islamic Republic of)

    2007-12-15

    The 252Cf radioisotope and 241Am-Be are intense neutron emitters that are readily encapsulated in compact, portable and sealed sources. Some features such as high flux of neutron emission and reliable neutron spectrum of these sources make them suitable for the prompt gamma neutron activation analysis (PGNAA) method. The PGNAA method can be used in medicine for neutron radiography and body chemical composition analysis. 252Cf and 241Am-Be sources generate not only neutrons but also are intense gamma emitters. Furthermore, the sample in medical treatments is a human body, so it may be exposed to the bombardments of these gamma-rays. Moreover, accumulations of these high-rate gamma-rays in the detector volume cause simultaneous pulses that can be piled up and distort the spectra in the region of interest (ROI). In order to remove these disadvantages in a practical way without being concerned about losing the thermal neutron flux, a gamma-ray filter made of Pb must be employed. The paper suggests a relatively safe body chemical composition analyser (BCCA) machine that uses a spherical Pb shield, enclosing the neutron source. Gamma-ray shielding effects and the optimum radius of the spherical Pb shield have been investigated, using the MCNP-4C code, and compared with the unfiltered case, the bare source. Finally, experimental results demonstrate that an optimised gamma-ray shield for the neutron source in a BCCA can reduce effectively the risk of exposure to the 252Cf and 241Am-Be sources.

  6. Individual-based analyses reveal limited functional overlap in a coral reef fish community.

    Science.gov (United States)

    Brandl, Simon J; Bellwood, David R

    2014-05-01

    Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. These results may have crucial consequences for our understanding of herbivorous processes on

  7. Particulate matter emission modelling based on soot and SOF from direct injection diesel engines

    International Nuclear Information System (INIS)

    Tan, P.Q.; Hu, Z.Y.; Deng, K.Y.; Lu, J.X.; Lou, D.M.; Wan, G.

    2007-01-01

    Particulate matter (PM) emission is one of the major pollutants from diesel engines, and it is harmful for human health and influences the atmospheric visibility. In investigations for reducing PM emission, a simulation model for PM emission is a useful tool. In this paper, a phenomenological, composition based PM model of direct injection (DI) diesel engines has been proposed and formulated to simulate PM emission. The PM emission model is based on a quasi-dimensional multi-zone combustion model using the formation mechanisms of the two main compositions of PM: soot and soluble organic fraction (SOF). First, the quasi-dimensional multi-zone combustion model is given. Then, two models for soot and SOF emissions are established, respectively, and after that, the two models are integrated into a single PM emission model. The soot emission model is given by the difference between a primary formation model and an oxidation model of soot. The soot primary formation model is the Hiroyasu soot formation model, and the Nagle and Strickland-Constable model is adopted for soot oxidation. The SOF emission model is based on an unburned hydrocarbons (HC) emission model, and the HC emission model is given by the difference between a HC primary formation model and a HC oxidation model. The HC primary formation model considers fuel injected and mixed beyond the lean combustion limit during ignition delay and fuel effusing from the nozzle sac volume at low pressure and low velocity. In order to validate the PM emission model, experiments were performed on a six cylinder, turbocharged and intercooled DI diesel engine. The simulation results show good agreement with the experimental data, which indicates the validity of the PM emission model. The calculation results show that the distinctions between PM and soot formation rates are mainly in the early combustion stage. The SOF formation has an important influence on the PM formation at lower loads, and soot formation dominates the
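
    The structure of the composition-based PM model described above can be summarised as a balance of formation and oxidation terms for each component. The sketch below mirrors that structure with a Hiroyasu-type Arrhenius formation term and a simplified oxidation term standing in for the Nagle and Strickland-Constable surface model; all rate constants and the single-zone inputs are illustrative placeholders rather than the calibrated values of the engine model in the paper.

```python
# Minimal sketch of the composition-based PM structure: net soot equals a
# Hiroyasu-type formation rate minus an oxidation rate, and PM = soot + SOF.
# The constants and the simplified oxidation term are illustrative placeholders.
import math

R = 8.314  # J mol^-1 K^-1

def soot_formation_rate(m_fuel_vapor, p_bar, T, A_f=150.0, E_f=52_335.0):
    """Hiroyasu-type formation: dm_sf/dt = A_f * m_fv * p^0.5 * exp(-E_f/RT)."""
    return A_f * m_fuel_vapor * math.sqrt(p_bar) * math.exp(-E_f / (R * T))

def soot_oxidation_rate(m_soot, x_o2, p_bar, T, A_o=400.0, E_o=120_000.0):
    """Simplified Arrhenius oxidation standing in for the NSC surface model."""
    return A_o * m_soot * x_o2 * math.sqrt(p_bar) * math.exp(-E_o / (R * T))

def pm_rate(m_fuel_vapor, m_soot, sof_rate, x_o2, p_bar, T):
    """dPM/dt = (soot formation - soot oxidation) + net SOF from unburned HC."""
    d_soot = (soot_formation_rate(m_fuel_vapor, p_bar, T)
              - soot_oxidation_rate(m_soot, x_o2, p_bar, T))
    return d_soot + sof_rate

# One illustrative zone and instant (all inputs are assumed values).
print(pm_rate(m_fuel_vapor=2e-6, m_soot=1e-7, sof_rate=5e-9,
              x_o2=0.1, p_bar=60.0, T=1800.0))
```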

  8. Genome-based comparative analyses of Antarctic and temperate species of Paenibacillus.

    Directory of Open Access Journals (Sweden)

    Melissa Dsouza

    Full Text Available Antarctic soils represent a unique environment characterised by extremes of temperature, salinity, elevated UV radiation, low nutrient and low water content. Despite the harshness of this environment, members of 15 bacterial phyla have been identified in soils of the Ross Sea Region (RSR. However, the survival mechanisms and ecological roles of these phyla are largely unknown. The aim of this study was to investigate whether strains of Paenibacillus darwinianus owe their resilience to substantial genomic changes. For this, genome-based comparative analyses were performed on three P. darwinianus strains, isolated from gamma-irradiated RSR soils, together with nine temperate, soil-dwelling Paenibacillus spp. The genome of each strain was sequenced to over 1,000-fold coverage, then assembled into contigs totalling approximately 3 Mbp per genome. Based on the occurrence of essential, single-copy genes, genome completeness was estimated at approximately 88%. Genome analysis revealed between 3,043-3,091 protein-coding sequences (CDSs, primarily associated with two-component systems, sigma factors, transporters, sporulation and genes induced by cold-shock, oxidative and osmotic stresses. These comparative analyses provide an insight into the metabolic potential of P. darwinianus, revealing potential adaptive mechanisms for survival in Antarctic soils. However, a large proportion of these mechanisms were also identified in temperate Paenibacillus spp., suggesting that these mechanisms are beneficial for growth and survival in a range of soil environments. These analyses have also revealed that the P. darwinianus genomes contain significantly fewer CDSs and have a lower paralogous content. Notwithstanding the incompleteness of the assemblies, the large differences in genome sizes, determined by the number of genes in paralogous clusters and the CDS content, are indicative of genome content scaling. Finally, these sequences are a resource for further

  9. Introduction of a method for presenting health-based impacts of the emission from products, based on emission measurements of materials used in manufacturing of the products

    Energy Technology Data Exchange (ETDEWEB)

    Jørgensen, Rikke Bramming, E-mail: rikke.jorgensen@iot.ntnu.no

    2013-11-15

    A method for presenting the health impact of emissions from furniture is introduced, which could be used in the context of environmental product declarations. The health impact is described by the negative indoor air quality potential, the carcinogenic potential, the mutagenic and reprotoxic potential, the allergenic potential, and the toxicological potential. An experimental study of emissions from four pieces of furniture is performed by testing both the materials used for production of the furniture and the complete piece of furniture, in order to compare the results gained by adding the emissions of the materials with the results gained from testing the finished piece of furniture. Calculating the emission from a product based on the emission from materials used in the manufacture of the product is a new idea. The relation between calculated and measured results for the same products differs between the four pieces of furniture tested. Large differences between measured and calculated values are seen for leather products. More knowledge is needed to understand why these differences arise. Testing materials allows us to compare different suppliers of the same material. Four different foams and three different timber materials are tested, and the results vary between materials of the same type. If the manufacturer possesses this type of knowledge of the materials from its subcontractors, it could be used as a selection criterion for the production of low-emission products. -- Highlights: • A method for presenting health impact of emissions is introduced. • An experimental study of emissions from four pieces of furniture is performed. • Health impact is calculated based on sum of contribution from the materials used. • Calculated health impact is compared to health impact of the manufactured product. • The results show that health impact could be useful in product development and for presentation in EPDs.
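
    The central idea, calculating a product's emission by adding up the emissions of its constituent materials, reduces to a simple area-weighted sum that can then be compared with a chamber test of the finished product. The sketch below illustrates this bookkeeping; the material names, surface areas, emission rates and the measured value are assumed for illustration only.

```python
# Minimal sketch of the additive bookkeeping described above: a product's
# emission is estimated as the area-weighted sum of its materials' emission
# rates and compared with a chamber measurement of the finished product.
# Material names, areas, rates and the measured value are assumed examples.
materials = [
    # (material, exposed area in m2, emission rate in ug m^-2 h^-1 for one VOC)
    ("foam",    1.2, 85.0),
    ("leather", 0.9, 140.0),
    ("timber",  0.6, 30.0),
]

calculated = sum(area * rate for _, area, rate in materials)   # ug/h per product
measured = 190.0   # assumed chamber result for the complete piece of furniture
print(f"calculated {calculated:.0f} ug/h vs measured {measured:.0f} ug/h")
```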

  10. Greenhouse gas emissions trading and project-based mechanisms. Proceedings - CATEP

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    Greenhouse gas emissions trading and project-based mechanisms for greenhouse gas reduction are emerging market-based instruments for climate change policy. This book presents a selection of papers from an international workshop co-sponsored by the OECD and Concerted Action on Tradeable Emissions Permits (CATEP), to discuss key research and policy issues relating to the design and implementation of these instruments. The papers cover the experience of developing and transition countries with greenhouse gas emissions trading and project-based mechanisms. In addition, the papers examine the use of tradeable permits in policy mixes and harmonisation of emissions trading schemes, as well as transition issues relating to greenhouse gas emissions trading markets.

  11. MULTI-DIMENSIONAL MASS SPECTROMETRY-BASED SHOTGUN LIPIDOMICS AND NOVEL STRATEGIES FOR LIPIDOMIC ANALYSES

    Science.gov (United States)

    Han, Xianlin; Yang, Kui; Gross, Richard W.

    2011-01-01

    Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525

  12. An Integrated Software Suite for Surface-based Analyses of Cerebral Cortex

    Science.gov (United States)

    Van Essen, David C.; Drury, Heather A.; Dickson, James; Harwell, John; Hanlon, Donna; Anderson, Charles H.

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database. PMID:11522765

  13. An integrated software suite for surface-based analyses of cerebral cortex

    Science.gov (United States)

    Van Essen, D. C.; Drury, H. A.; Dickson, J.; Harwell, J.; Hanlon, D.; Anderson, C. H.

    2001-01-01

    The authors describe and illustrate an integrated trio of software programs for carrying out surface-based analyses of cerebral cortex. The first component of this trio, SureFit (Surface Reconstruction by Filtering and Intensity Transformations), is used primarily for cortical segmentation, volume visualization, surface generation, and the mapping of functional neuroimaging data onto surfaces. The second component, Caret (Computerized Anatomical Reconstruction and Editing Tool Kit), provides a wide range of surface visualization and analysis options as well as capabilities for surface flattening, surface-based deformation, and other surface manipulations. The third component, SuMS (Surface Management System), is a database and associated user interface for surface-related data. It provides for efficient insertion, searching, and extraction of surface and volume data from the database.

  14. The Seismic Reliability of Offshore Structures Based on Nonlinear Time History Analyses

    International Nuclear Information System (INIS)

    Hosseini, Mahmood; Karimiyani, Somayyeh; Ghafooripour, Amin; Jabbarzadeh, Mohammad Javad

    2008-01-01

    Regarding past earthquake damage to offshore structures, which are vital structures in the oil and gas industries, it is important that their seismic design is performed with very high reliability. Accepting Nonlinear Time History Analyses (NLTHA) as the most reliable seismic analysis method, in this paper an offshore platform of jacket type with a height of 304 feet, a deck of 96 feet by 94 feet, and a weight of 290 million pounds has been studied. At first, some Push-Over Analyses (POA) have been performed to recognize the more critical members of the jacket, based on the range of their plastic deformations. Then NLTHA have been performed by using the 3-component accelerograms of 100 earthquakes, covering a wide range of frequency content, and normalized to three Peak Ground Acceleration (PGA) levels of 0.3 g, 0.65 g, and 1.0 g. By using the results of NLTHA, the damage and rupture probabilities of critical members have been studied to assess the reliability of the jacket structure. Given that different structural members of the jacket have different effects on the stability of the platform, an 'importance factor' has been considered for each critical member based on its location and orientation in the structure, and then the reliability of the whole structure has been obtained by combining the reliabilities of the critical members, each with its specific importance factor

  15. Analyses of criticality and reactivity for TRACY experiments based on JENDL-3.3 data library

    International Nuclear Information System (INIS)

    Sono, Hiroki; Miyoshi, Yoshinori; Nakajima, Ken

    2003-01-01

    The parameters on criticality and reactivity employed for computational simulations of the TRACY supercritical experiments were analyzed using a recently revised nuclear data library, JENDL-3.3. The parameters based on the JENDL-3.3 library were compared to those based on two former-used libraries, JENDL-3.2 and ENDF/B-VI. In the analyses computational codes, MVP, MCNP version 4C and TWOTRAN, were used. The following conclusions were obtained from the analyses: (1) The computational biases of the effective neutron multiplication factor attributable to the nuclear data libraries and to the computational codes do not depend the TRACY experimental conditions such as fuel conditions. (2) The fractional discrepancies in the kinetic parameters and coefficients of reactivity are within ∼5% between the three libraries. By comparison between calculations and measurements of the parameters, the JENDL-3.3 library is expected to give closer values to the measurements than the JENDL-3.2 and ENDF/B-VI libraries. (3) While the reactivity worth of transient rods expressed in the $ unit shows ∼5% discrepancy between the three libraries according to their respective β eff values, there is little discrepancy in that expressed in the Δk/k unit. (author)

  16. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
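
    The ranking step described above amounts to counting, for every candidate article, how often it is cited together with at least one of the 'known' articles, and screening only candidates whose count clears a threshold. The sketch below shows this scoring on toy citation data; the data structure and threshold are illustrative assumptions, whereas the study retrieved co-citations from the Web of Science database.

```python
# Minimal sketch of co-citation ranking: count how often each candidate
# article is cited together with one or more "known" articles, then screen
# only candidates whose score clears a threshold. Toy data for illustration.
from collections import Counter

def cocitation_scores(citing_papers, known_ids):
    """citing_papers: iterable of reference lists (one list per citing paper)."""
    scores = Counter()
    known = set(known_ids)
    for refs in citing_papers:
        refs = set(refs)
        if refs & known:                   # this paper cites a known article
            for art in refs - known:       # credit everything co-cited with it
                scores[art] += 1
    return scores

citing = [["known1", "A", "B"], ["known1", "A", "C"], ["known2", "A"], ["B", "C"]]
scores = cocitation_scores(citing, known_ids={"known1", "known2"})
threshold = 2
print([a for a, s in scores.most_common() if s >= threshold])   # -> ['A']
```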

  17. Policy design and performance of emissions trading markets: an adaptive agent-based analysis.

    Science.gov (United States)

    Bing, Zhang; Qinqin, Yu; Jun, Bi

    2010-08-01

    Emissions trading is considered to be a cost-effective environmental economic instrument for pollution control. However, the pilot emissions trading programs in China have failed to bring remarkable success in the campaign for pollution control. The policy design of an emissions trading program is found to have a decisive impact on its performance. In this study, an artificial market for sulfur dioxide (SO2) emissions trading applying the agent-based model was constructed. The performance of the Jiangsu SO2 emissions trading market under different policy design scenario was also examined. Results show that the market efficiency of emissions trading is significantly affected by policy design and existing policies. China's coal-electricity price system is the principal factor influencing the performance of the SO2 emissions trading market. Transaction costs would also reduce market efficiency. In addition, current-level emissions discharge fee/tax and banking mechanisms do not distinctly affect policy performance. Thus, applying emissions trading in emission control in China should consider policy design and interaction with other existing policies.

  18. Greenhouse Gas Emission Intensities for the Livestock Sector in Indonesia, Based on the National Specific Data

    Directory of Open Access Journals (Sweden)

    Eska Nugrahaeningtyas

    2018-06-01

    Full Text Available The aims of this study were to calculate greenhouse gas (GHG) emissions and to identify the trends of GHG emission intensity, based on meat production from the livestock sector in Indonesia, which had not been done before. The total emissions from the livestock sector from 2000 to 2015 in Indonesia were calculated using the 2006 Intergovernmental Panel on Climate Change Guideline (2006 IPCC GL) using Tier 1 and Tier 2, with its default values and some of the country-specific data that were found in the grey literature. During 2000 to 2015, the change from the Tier 1 to Tier 2 methods resulted in an approximately 7.39% emission decrease from enteric fermentation and a 4.24% increase from manure management, which resulted in a 4.98% decrease in the total emissions. The emission share from manure management increased by about 9% and 6% using Tier 1 and Tier 2, respectively. In contrast with the total emissions, the overall emission intensity in Indonesia decreased (up to 60.77% for swine), showing that livestock productivity in Indonesia has become more efficient. In order to meet the meat demand with less GHG emissions, chicken farming is one option to be developed. The increased emission and share from manure management indicated that manure management systems need attention, especially for beef cattle and swine.
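
    At Tier 1, the bookkeeping behind such inventories is essentially population times default emission factor, converted to CO2-equivalents and divided by meat output to give an emission intensity. The sketch below illustrates this for enteric fermentation; the populations, emission factors, global warming potential and production figures are assumed for illustration and are not the country-specific values used in the paper.

```python
# Minimal sketch of Tier 1 bookkeeping: emissions = population x default
# emission factor, converted to CO2-eq with a global warming potential, and
# emission intensity = emissions / meat production. All numbers are assumed.
GWP_CH4 = 25.0   # kg CO2-eq per kg CH4 (AR4-style value; an assumption here)

def tier1_enteric_co2eq_t(population, ef_kg_ch4_per_head_yr):
    """Enteric fermentation emissions in tonnes CO2-eq per year."""
    return population * ef_kg_ch4_per_head_yr * GWP_CH4 / 1000.0

herd = {"beef_cattle": (4_500_000, 47.0), "swine": (8_000_000, 1.0)}   # head, EF
meat_production_t = {"beef_cattle": 500_000, "swine": 780_000}         # t/yr

for species, (pop, ef) in herd.items():
    emis = tier1_enteric_co2eq_t(pop, ef)
    intensity = emis / meat_production_t[species]   # t CO2-eq per t meat
    print(f"{species}: {emis:,.0f} t CO2-eq/yr, intensity {intensity:.2f}")
```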

  19. Benefits of Exercise Training For Computer-Based Staff: A Meta Analyses

    Directory of Open Access Journals (Sweden)

    Mothna Mohammed

    2017-04-01

    Full Text Available Background: Office workers sit down to work for approximately 8 hours a day and, as a result, many of them do not have enough time for any form of physical exercise. This can lead to musculoskeletal discomforts, especially low back pain, and recently many researchers have focused on home/office-based exercise training for the prevention and treatment of low back pain among this population. Objective: This meta-analysis discusses the latest suggested exercises for office workers based on the mechanisms and theories behind low back pain in this population. Method: The author collected relevant papers previously published on the subject. Google Scholar, Scopus, and PubMed were used as sources to find the articles. Only articles that were published using the same methodology, including office workers, musculoskeletal discomforts, low back pain, and exercise training keywords, were selected. Studies that failed to report sufficient sample statistics, or lacked a substantial review of past academic scholarship and/or clear methodologies, were excluded. Results: Limited evidence regarding the prevention of, and treatment methods for, musculoskeletal discomfort, especially in the low back, among office workers is available. The findings showed that training exercises had a significant effect (p<0.05) on low back pain discomfort scores and decreased pain levels in response to office-based exercise training. Conclusion: Office-based exercise training can affect pain/discomfort scores among office workers through positive effects on flexibility and strength of muscles. As such, it should be suggested to occupational therapists as a practical way for the treatment/prevention of low back pain among office workers.

  20. Greenhouse Gas Emissions of Tourism-Based Leisure Farms in Taiwan

    Directory of Open Access Journals (Sweden)

    Kuo-Tsang Huang

    2015-08-01

    Full Text Available This research is the first attempt at a carbon emission investigation of tourism-based farms. A total of 36 cases were investigated. The results reveal that each tourist returns an average revenue of 28.6 USD and generates an average of 10.9 kg-CO2eq of carbon emissions per visit. The average carbon emission density is 8.2 t/ha·year per unit land area and 245 kg/m²·year per unit floor area. It is estimated that the overall carbon emissions reach 321,751 tons annually. The tourism-based farms were clustered into five categories, based on their business characteristics. It was found that high-end vacation leisure farms produce 2.46 times the carbon emissions of natural eco-conservation farms. Carbon emissions were 42% higher than the annual average in July and August. A secondary high season occurs in February, but it exceeds the annual average by only 8% because of the mild climate. Two significant models for predicting carbon emissions were constructed by stepwise regression. As agricultural administrative authorities in Taiwan have gradually begun permitting cultivated land to be used for multiple purposes, tourism-based farms have been increasing drastically. This study provides references for both public authorities and farm managers in exploring the issues with regard to carbon emissions and farm sustainability.

  1. A proposal of Fourier-Bessel expansion with optimized ensembles of bases to analyse two dimensional image

    Science.gov (United States)

    Yamasaki, K.; Fujisawa, A.; Nagashima, Y.

    2017-09-01

    Finding the best set of fitting function bases is a critical issue in mode structural analysis of two-dimensional images such as plasma emission profiles. The paper proposes a method to optimize a set of bases in the case of Fourier-Bessel function series, using their orthonormal property, for more efficient and precise analysis. The method is applied to a tomography image of plasma emission obtained with the maximum-likelihood expectation maximization method in a linear cylindrical device. The result demonstrates the strength of the method, which achieves a smaller residual error and a minimum Akaike information criterion with a smaller number of fitting function bases.
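
    One way to read the basis-optimization idea above is as a greedy selection of Fourier-Bessel modes ranked by their projection onto the image, with the Akaike information criterion deciding how many bases to keep. The sketch below implements such a matching-pursuit-style selection on a synthetic mode image; the image, the basis truncation and the AIC form used (N·ln(RSS/N) + 2k) are illustrative assumptions, not the exact procedure of the paper.

```python
# Minimal sketch: greedy selection of Fourier-Bessel bases for a 2-D profile,
# with AIC choosing the number of bases. Image, truncation and AIC form are
# illustrative assumptions.
import numpy as np
from scipy.special import jv, jn_zeros

# Synthetic 2-D "emission profile" on a polar grid with a dominant m=2 mode.
n_r, n_t = 64, 128
r = np.linspace(0.0, 1.0, n_r)
theta = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
R, TH = np.meshgrid(r, theta, indexing="ij")
rng = np.random.default_rng(1)
image = jv(2, jn_zeros(2, 3)[2] * R) * np.cos(2 * TH) + 0.05 * rng.standard_normal(R.shape)

# Candidate Fourier-Bessel bases J_m(k_mn r) {cos, sin}(m theta), zero at r = 1.
bases = []
for m in range(4):
    for z in jn_zeros(m, 4):
        rad = jv(m, z * R)
        bases.append(rad * np.cos(m * TH))
        if m > 0:
            bases.append(rad * np.sin(m * TH))

w = R                       # polar area weight ~ r dr dtheta
N = image.size
residual = image.copy()
recon = np.zeros_like(image)
best = (np.inf, 0)          # (AIC, number of bases kept)
for k in range(1, len(bases) + 1):
    # Pick the basis most correlated with the current residual (greedy step).
    i = max(range(len(bases)), key=lambda j: abs(np.sum(residual * bases[j] * w)))
    coef = np.sum(residual * bases[i] * w) / np.sum(bases[i] ** 2 * w)
    recon += coef * bases[i]
    residual = image - recon
    aic = N * np.log(np.sum(residual ** 2) / N) + 2 * k
    best = min(best, (aic, k))
print("AIC-selected number of bases:", best[1])
```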

  2. The influence of biomass supply chains and by-products on the greenhouse gas emissions from gasification-based bio-SNG production systems

    International Nuclear Information System (INIS)

    Holmgren, Kristina M.; Berntsson, Thore S.; Andersson, Eva; Rydberg, Tomas

    2015-01-01

    This study analyses the impact on the GHG (greenhouse gas) emissions of the raw material supply chain, the utilisation of excess heat and CO2 storage for a bio-SNG (biomass gasification-based synthetic natural gas) system by applying a consequential life cycle assessment approach. The impact of the biomass supply chain is analysed by assessing GHG emissions of locally produced woodchips and pellets with regional or transatlantic origin. Results show that the supply area for the gasification plant can be substantially increased with only modest increases in overall GHG emissions (3–5%) by using regionally produced pellets. The transatlantic pellet chains contribute to significantly higher GHG emissions. Utilising excess heat for power generation or steam delivery for industrial use contributes to lower emissions from the system, whereas delivery of district heating can contribute to either increased or decreased emissions. The production technology of the replaced heat and the carbon intensity of the reference power production were decisive for the benefits of the heat deliveries. Finally, the storage of CO2 separated from the syngas upgrading and from the flue gases of the gasifier can nearly double the GHG emission reduction potential of the bio-SNG system. - Highlights: • Greenhouse gas emission evaluation of gasification-based bio-SNG system is made. • The impact of biomass supply chains and utilisation of excess heat is in focus. • Locally produced woodchips result in lowest overall greenhouse gas emissions. • Regionally produced pellets have small impact on overall greenhouse gas emissions. • Storing separated CO2 from the bio-SNG process reduces the GHG impact significantly.

  3. Greenhouse gas emission inventory based on full energy chain analysis

    International Nuclear Information System (INIS)

    Dones, R.; Hirschberg, S.; Knoepfel, I.

    1996-01-01

    Methodology, characteristics, features and results obtained for greenhouse gases within the recent Swiss LCA study 'Environmental Life-Cycle Inventories of Energy Systems' are presented. The focus of the study is on existing average Full Energy Chains (FENCHs) in the electricity generation mixes in Europe and in Switzerland. The systems, including coal (hard coal and lignite), oil, natural gas, nuclear and hydro, are discussed one by one as well as part of the electricity mixes. Photovoltaic systems are covered separately since they are not included in the electricity mixes. A sensitivity analysis on methane leakage during long-range transport via pipeline is shown. Whilst within the current study emissions are not attributed to specific countries, the main sectors contributing to the total GHGs emissions calculated for the various FENCHs are specified. (author). 10 refs, 10 figs, 9 tabs

  4. Greenhouse gas emission inventory based on full energy chain analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dones, R; Hirschberg, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Knoepfel, I [Federal Inst. of Technology Zurich, Zurich (Switzerland)

    1996-07-01

    Methodology, characteristics, features and results obtained for greenhouse gases within the recent Swiss LCA study 'Environmental Life-Cycle Inventories of Energy Systems' are presented. The focus of the study is on existing average Full Energy Chains (FENCHs) in the electricity generation mixes in Europe and in Switzerland. The systems, including coal (hard coal and lignite), oil, natural gas, nuclear and hydro, are discussed one by one as well as part of the electricity mixes. Photovoltaic systems are covered separately since they are not included in the electricity mixes. A sensitivity analysis on methane leakage during long-range transport via pipeline is shown. Whilst within the current study emissions are not attributed to specific countries, the main sectors contributing to the total GHGs emissions calculated for the various FENCHs are specified. (author). 10 refs, 10 figs, 9 tabs.

  5. Saturated virtual fluorescence emission difference microscopy based on detector array

    Science.gov (United States)

    Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Ge, Baoliang; Wang, Wensheng; Liu, Xu

    2017-07-01

    Virtual fluorescence emission difference microscopy (vFED) has been proposed recently to enhance the lateral resolution of confocal microscopy with a detector array, implemented by scanning a doughnut-shaped pattern. Theoretically, the resolution can be enhanced by around 1.3-fold compared with that in confocal microscopy. For further improvement of the resolving ability of vFED, a novel method is presented utilizing fluorescence saturation for super-resolution imaging, which we called saturated virtual fluorescence emission difference microscopy (svFED). With a point detector array, matched solid and hollow point spread functions (PSF) can be obtained by photon reassignment, and the difference results between them can be used to boost the transverse resolution. Results show that the diffraction barrier can be surpassed by at least 34% compared with that in vFED and the resolution is around 2-fold higher than that in confocal microscopy.
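
    The core operation shared by FED-type methods, including the virtual variant described above, is the pixel-wise subtraction of a scaled hollow (doughnut) image from a solid image, followed by clipping of negative values. The sketch below shows this subtraction on synthetic PSF-like images; the subtraction weight and the Gaussian/doughnut shapes are illustrative assumptions, and the detector-array reassignment and saturation steps of svFED are not modelled.

```python
# Minimal sketch of the FED-type difference operation: subtract a scaled
# hollow (doughnut) image from the solid image and clip negative values.
# The weight gamma and the synthetic PSF-like images are assumptions.
import numpy as np

def fed_difference(i_solid, i_hollow, gamma=0.7):
    """I_diff = I_solid - gamma * I_hollow, clipped at zero."""
    return np.clip(i_solid - gamma * i_hollow, 0.0, None)

x = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
i_solid = np.exp(-r2 / 0.08)                  # Gaussian-like "solid" image
i_hollow = (r2 / 0.08) * np.exp(-r2 / 0.08)   # doughnut-like "hollow" image

narrowed = fed_difference(i_solid, i_hollow)
print("peak preserved:", round(narrowed.max(), 3))
```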

  6. Base Carbone. Documentation about the emission factors of the Base Carbone® database

    International Nuclear Information System (INIS)

    2014-01-01

    The Base Carbone® is a public database of emission factors as required for carrying out carbon accounting exercises. It is administered by ADEME, but its governance involves many stakeholders and it can be added to freely. The articulation and convergence of environmental regulations requires data homogenization. The Base Carbone® proposes to be this centralized data source. Today, it is the reference database for article 75 of the Grenelle II Act. It is also entirely consistent with article L1341-3 of the French Transport Code and the default values of the European emission quotas exchange system. The data of the Base Carbone® can be freely consulted by all. Furthermore, the originality of this tool is that it enables third parties to propose their own data (feature scheduled for February 2015). These data are then assessed for their quality and transparency, then validated or refused for incorporation in the Base Carbone®. Lastly, a forum (planned for February 2015) will enable users to ask questions about the data, or to contest the data. The administration of the Base Carbone® is handled by ADEME. However, its orientation and the data that it contains are validated by a governance committee incorporating various public and private stakeholders. Finally, transparency is one of the keystones of the Base Carbone®. Documentation details the hypotheses underlying the construction of all the data in the base, and refers to the studies that have enabled their construction. This document brings together the different versions of the Base Carbone® documentation: the most recent version (v11.5) and the previous version (v11.0), which is split into two parts dealing with the general case and with the specific case of overseas territories.

  7. Allowable CO2 emissions based on regional and impact-related climate targets: The role of land processes

    Science.gov (United States)

    Seneviratne, S. I.; Donat, M.; Pitman, A.; Knutti, R.; Wilby, R.; Vogel, M.; Orth, R.

    2016-12-01

    Global temperature targets, such as the widely accepted "2° and 1.5° targets", may fail to communicate the urgency of reducing CO2 emissions because they are disconnected from their implications. The translation of CO2 emissions into regional- and impact-related climate targets is more powerful because such targets are more directly aligned with individual national interests. A recent publication (Seneviratne et al. 2016, Nature) reveals that regional changes in extreme temperatures and precipitation scale robustly with global temperature across scenarios, and thus with cumulative CO2 emissions. They thus allow a better communication of implied regional impacts associated with global targets for CO2 emissions. However, the regional responses are very varied and display strong differences in regional temperature and hydrological sensitivity. Process-based analyses explain these divergences and highlight avenues for reducing uncertainties in regional projections of extremes, in particular related to the role of land-atmosphere feedbacks. These results have important implications for the design of regional mitigation and climate adaptation policies, for instance related to land use changes. Reference: Seneviratne, S.I., M.G. Donat, A.J. Pitman, R. Knutti, and R. Wilby, 2016, Nature, 529, 477-483, doi:10.1038/nature16542

  8. Positron emission tomography, physical bases and comparaison with other techniques

    International Nuclear Information System (INIS)

    Guermazi, Fadhel; Hamza, F; Amouri, W.; Charfeddine, S.; Kallel, S.; Jardak, I.

    2013-01-01

    Positron emission tomography (PET) is a medical imaging technique that measures the three-dimensional distribution of molecules labelled with a positron-emitting radionuclide. PET has grown significantly in clinical fields, particularly in oncology for diagnosis and therapeutic follow-up purposes. The technique is evolving rapidly; among the technical improvements is the coupling of the PET scanner with computed tomography (CT). A PET scan is obtained by intravenous injection of a radioactive tracer. The marker is usually fluorine (18F) incorporated in a glucose analogue, forming fluorine-18 fluorodeoxyglucose (18F-FDG). This tracer, similar to glucose, accumulates in tissues that consume large quantities of the sugar, such as cancerous tissue, cardiac muscle or the brain. Detection relies on scintillation crystals (BGO, LSO, LYSO) suited to the high energy (511 keV) of the gamma photons originating from the annihilation of a positron with an electron. The coincidence detection electronics are based on two criteria: a time window, of about 6 to 15 ns, and an energy window. This system records the true coincidences that correspond to the detection of two 511 keV photons from the same annihilation. Most PET devices consist of a series of elementary detectors distributed annularly around the patient. Each detector comprises a scintillation crystal matrix coupled to a small number (4 or 6) of photomultipliers. The coincidence circuit determines the line of response of the annihilation from the two elementary detectors involved. The processing of such information must be extremely fast, considering the count rates encountered in practice. The information measured by the coincidence circuit is then placed in a matrix, or sinogram, which contains the set of projection elements of a section of the object. Images are obtained by tomographic reconstruction on powerful computer stations equipped with software tools allowing the analysis and
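
    The two acceptance criteria mentioned above (a coincidence time window and an energy window around 511 keV) can be expressed as a simple per-event-pair test. The sketch below illustrates that test; the window widths and the toy single events are assumed values within the ranges quoted in the text.

```python
# Minimal sketch of the two coincidence criteria: two detected photons are
# accepted as a coincidence candidate only if they fall inside a common time
# window and an energy window around 511 keV. Windows and events are assumed.
def in_coincidence(evt1, evt2, time_window_ns=10.0, e_low_kev=425.0, e_high_kev=650.0):
    dt_ok = abs(evt1["t_ns"] - evt2["t_ns"]) <= time_window_ns
    e_ok = all(e_low_kev <= evt["e_kev"] <= e_high_kev for evt in (evt1, evt2))
    return dt_ok and e_ok

a = {"t_ns": 1203.4, "e_kev": 508.0}
b = {"t_ns": 1207.1, "e_kev": 495.0}
print(in_coincidence(a, b))   # True: within 10 ns and both near 511 keV
```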

  9. Reviewing PSA-based analyses to modify technical specifications at nuclear power plants

    International Nuclear Information System (INIS)

    Samanta, P.K.; Martinez-Guridi, G.; Vesely, W.E.

    1995-12-01

    Changes to Technical Specifications (TSs) at nuclear power plants (NPPs) require review and approval by the United States Nuclear Regulatory Commission (USNRC). Currently, many requests for changes to TSs use analyses that are based on a plant's probabilistic safety assessment (PSA). This report presents an approach to reviewing such PSA-based submittals for changes to TSs. We discuss the basic objectives of reviewing a PSA-based submittal to modify NPP TSs; the methodology of reviewing a TS submittal, and the differing roles of a PSA review, a PSA Computer Code review, and a review of a TS submittal. To illustrate this approach, we discuss our review of changes to allowed outage time (AOT) and surveillance test interval (STI) in the TS for the South Texas Project Nuclear Generating Station. Based on this experience gained, a check-list of items is given for future reviewers; it can be used to verify that the submittal contains sufficient information, and also that the review has addressed the relevant issues. Finally, recommended steps in the review process and the expected findings of each step are discussed

  10. Performance Analyses of Renewable and Fuel Power Supply Systems for Different Base Station Sites

    Directory of Open Access Journals (Sweden)

    Josip Lorincz

    2014-11-01

    Full Text Available Base station sites (BSSs powered with renewable energy sources have gained the attention of cellular operators during the last few years. This is because such “green” BSSs impose significant reductions in the operational expenditures (OPEX of telecom operators due to the possibility of on-site renewable energy harvesting. In this paper, the green BSSs power supply system parameters detected through remote and centralized real time sensing are presented. An implemented sensing system based on a wireless sensor network enables reliable collection and post-processing analyses of many parameters, such as: total charging/discharging current of power supply system, battery voltage and temperature, wind speed, etc. As an example, yearly sensing results for three different BSS configurations powered by solar and/or wind energy are discussed in terms of renewable energy supply (RES system performance. In the case of powering those BSS with standalone systems based on a fuel generator, the fuel consumption models expressing interdependence among the generator load and fuel consumption are proposed. This has allowed energy-efficiency comparison of the fuel powered and RES systems, which is presented in terms of the OPEX and carbon dioxide (CO2 reductions. Additionally, approaches based on different BSS air-conditioning systems and the on/off regulation of a daily fuel generator activity are proposed and validated in terms of energy and capital expenditure (CAPEX savings.

  11. Emission spectral analysis of nickel-base superalloys with fixed time integration technique

    International Nuclear Information System (INIS)

    Okochi, Haruno; Takahashi, Katsuyuki; Suzuki, Shunichi; Sudo, Emiko

    1980-01-01

    Simultaneous determination of multiple elements (C, B, Mo, Ta, Co, Fe, Mn, Cr, Nb, Cu, Ti, Zr, and Al) in nickel-base superalloys (Ni: 68-76%) was performed by emission spectral analysis. At first, samples with various nickel contents (Ni: 68-76%) were prepared by using JAERI R9, nickel and other metals (Fe, Co, or Cr). It was confirmed that in the internal standard method (Ni II 227.73 nm), the analytical values of all the elements examined decreased with a decrease of the integration time (ca. 3.9-4.6 s), that is, with an increase of the nickel content. On the other hand, with the fixed time integration method, elements other than C, Mo, and Cr did not suffer interference within the range of nickel contents examined. A series of nickel-base binary alloys (Al, Si, Ti, Cr, Mn, Fe, Co, Nb, Mo, and W series) were prepared by high frequency induction melting and the centrifugal casting method, and formulae for correcting interferences from nearby spectral lines were obtained. Various synthetic samples were prepared and analysed by this method. The equations of the calibration curves were derived from the data for standard samples (JAERI R1-R6, NBS 1189, 1203-1205, and B.S. 600B) by curve fitting with orthogonal polynomials using a computer. For the assessment of the method studied, the F-test was performed by comparing the variances of the analytical values of the standard and synthetic samples. The surfaces of specimens were polished with a belt grinder using No. 80 alumina or silicon carbide endless paper. The preburn and integration periods were set at 5 and 6 s, respectively. A few standard samples which gave poorer reproducibility in emission spectral analysis were investigated with an optical microscope and an electron probe X-ray microanalyser. (author)

  12. Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.

    Science.gov (United States)

    Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V

    2017-06-01

    Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.

  13. Linking project-based mechanisms with domestic greenhouse gas emissions trading schemes

    International Nuclear Information System (INIS)

    Bygrave, S.; Bosi, M.

    2004-01-01

    Although there are a number of possible links between emission trading and project-based mechanisms, the focus of this paper is on linking domestic GHG emission trading schemes with: (1) domestic; and, (2) international (JI and CDM) GHG reduction project activities. The objective is to examine some of the challenges in linking DETs and project-based mechanisms, as well as some possible solutions to address these challenges. The link between JI / CDM and intergovernmental international emissions trading (i.e. Article 17 of the Kyoto Protocol) is defined by the Kyoto Protocol, and therefore is not covered in this paper. The paper is written in the context of: (a) countries adhering to the Kyoto Protocol and elaborating their strategies to meet their GHG emission commitments, including through the use of the emissions trading and project-based mechanisms. For example, the European Union (EU) will be commencing a GHG Emissions Trading Scheme in January 2005, and recently, the Council of ministers and the European Parliament agreed on a text for an EU Linking Directive allowing the use of JI and CDM emission units in the EU Emission Trading Scheme (EU-ETS); and (b) all countries (and/or regions within countries) with GHG emission obligations that may choose to use domestic emissions trading and project-based mechanisms to meet their GHG commitments. The paper includes the following elements: (1) an overview of the different flexibility mechanisms (i.e. GHG emissions trading and PBMs), including a brief description and comparisons between the mechanisms (Section 3); (2) an exploration of the issues that emerge when project-based mechanisms link with domestic emissions trading schemes, as well as possible solutions to address some of the challenges raised (Section 4); (3) a case study examining the EU-ETS and the EU Linking Directive on project-based mechanisms, in particular on how the EU is addressing in a practical context relevant linking issues (Section 5); (4) a

  14. Perspectives on greenhouse gas emission estimates based on Australian wastewater treatment plant operating data.

    Science.gov (United States)

    de Haas, D W; Pepperell, C; Foley, J

    2014-01-01

    Primary operating data were collected from forty-six wastewater treatment plants (WWTPs) located across three states within Australia. The size range of plants was indicatively from 500 to 900,000 person equivalents. Direct and indirect greenhouse gas emissions were calculated using a mass balance approach and default emission factors, based on Australia's National Greenhouse Energy Reporting (NGER) scheme and IPCC guidelines. A Monte Carlo-type combined uncertainty analysis was applied to some of the key emission factors in order to study sensitivity. The results suggest that Scope 2 emissions (indirect emissions due to electrical power purchased from the grid) dominate the emissions profile for most of the plants (indicatively half to three quarters of the average estimated total emissions). This is only offset for the relatively small number of plants (in this study) that have significant on-site power generation from biogas, or where the water utility purchases grid electricity generated from renewable sources. For plants with anaerobic digestion, inventory data issues around theoretical biogas generation, capture and measurement were sometimes encountered that can skew reportable emissions using the NGER methodology. Typically, nitrous oxide (N2O) emissions dominated the Scope 1 (direct) emissions. However, N2O still only accounted for approximately 10 to 37% of total emissions. This conservative estimate is based on the 'default' NGER steady-state emission factor, which amounts to 1% of nitrogen removed through biological nitrification-denitrification processing in the plant (or indicatively 0.7 to 0.8% of plant influent total nitrogen). Current research suggests that true N2O emissions may be much lower and certainly not steady-state. The results of this study help to place in context research work that is focused on direct emissions from WWTPs (including N2O, methane and carbon dioxide of non-biogenic origin). For example, whereas non-biogenic CO2
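
    The Monte Carlo-type uncertainty analysis mentioned above can be pictured as repeatedly sampling the uncertain emission factors and recomputing the plant total. The sketch below propagates assumed distributions for a Scope 2 electricity factor and an N2O factor through a toy plant; the activity data, distribution shapes and the N2O global warming potential are illustrative assumptions, not the NGER/IPCC defaults used in the study.

```python
# Minimal sketch of Monte Carlo uncertainty propagation for a WWTP inventory:
# sample uncertain emission factors, recompute the plant total many times and
# report the spread. All ranges, activity data and the GWP are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

grid_power_mwh = 4_000.0   # electricity purchased per year (assumed)
n_removed_t = 120.0        # nitrogen removed by nitrification-denitrification (assumed)

# Scope 2 factor (t CO2-eq/MWh) and N2O factor (fraction of N removed emitted as N2O-N)
ef_power = rng.normal(0.9, 0.1, n)
ef_n2o = rng.triangular(0.002, 0.01, 0.035, n)

scope2 = grid_power_mwh * ef_power
n2o = n_removed_t * ef_n2o * 44.0 / 28.0 * 298.0   # t N2O-N -> t N2O -> t CO2-eq
total = scope2 + n2o

lo, hi = np.percentile(total, [2.5, 97.5])
print(f"median {np.median(total):.0f} t CO2-eq, 95% interval {lo:.0f}-{hi:.0f}")
```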

  15. Exergy-based assessment for waste gas emissions from Chinese transportation

    International Nuclear Information System (INIS)

    Ji Xi; Chen, G.Q.; Chen, B.; Jiang, M.M.

    2009-01-01

    As an effective measure of the environmental impact associated with waste emissions, exergy is used to unify the assessment of the waste gases CO, NOx and SO2 emitted from fossil fuel consumption by the transportation system in China. An index of emission exergy intensity, defined as the ratio of the total chemical exergy of the emissions to the total converted turnover of the transportation, is proposed to quantify the environmental impact per unit of traffic service. Time series analyses are presented for the emission exergy and emission exergy intensity of the whole Chinese transportation system as well as for its four sectors of highways, railways, waterways and civil aviation from 1978 to 2004. For the increasing emission exergy, with CO taking the largest share, the highways sector was the major contributor, while the railways sector, initially the second main contributor, became the smallest after 1995. The temporal and structural variations of the emissions are illustrated against the transition of the transportation system in a socio-economic perspective, with emphasis on policy-making implications.
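
    A compact statement of the proposed index is given below; the symbols are chosen here for illustration and are not taken from the paper.

```latex
% Emission exergy intensity: total chemical exergy of the waste gases
% per unit of converted transportation turnover (traffic service).
\[
  I_{\mathrm{ex}} \;=\;
  \frac{\sum_{i \in \{\mathrm{CO},\,\mathrm{NO}_x,\,\mathrm{SO}_2\}} m_i\, ex_{\mathrm{ch},i}}
       {T_{\mathrm{conv}}}
\]
% m_i       : emitted mass of species i over the period considered
% ex_{ch,i} : specific chemical exergy of species i
% T_conv    : converted transportation turnover in the same period
```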

  16. High-spatiotemporal-resolution ship emission inventory of China based on AIS data in 2014.

    Science.gov (United States)

    Chen, Dongsheng; Wang, Xiaotong; Li, Yue; Lang, Jianlei; Zhou, Ying; Guo, Xiurui; Zhao, Yuehua

    2017-12-31

    Ship exhaust emissions have been considered a significant source of air pollution, with adverse impacts on the global climate and human health. China, as one of the largest shipping countries, has long been in great need of in-depth analysis of ship emissions. This study for the first time developed a comprehensive national-scale ship emission inventory with 0.005°×0.005° resolution in China for 2014, using the bottom-up method based on Automatic Identification System (AIS) data of the full year of 2014. The emission estimation involved 166,546 unique vessels observed from over 15 billion AIS reports, covering OGVs (ocean-going vessels), CVs (coastal vessels) and RVs (river vessels). Results show that the total estimated ship emissions for China in 2014 were 1.1937×10^6 t (SO2), 2.2084×10^6 t (NOX), 1.807×10^5 t (PM10), 1.665×10^5 t (PM2.5), 1.116×10^5 t (HC), 2.419×10^5 t (CO), and 7.843×10^7 t (CO2, excluding RVs), respectively. OGVs were the main emission contributors, with proportions of 47%-74% of the emission totals for different species. The vessel type with the most emissions was container ships (~43.6%), followed by bulk carriers (~17.5%), oil tankers (~5.7%) and fishing ships (~4.9%). Monthly variations showed that emissions from transport vessels had a low point in February, while fishing ships presented two emission peaks in May and September. In terms of port clusters, ship emissions in BSA (Bohai Sea Area), YRD (Yangtze River Delta) and PRD (Pearl River Delta) accounted for ~13%, ~28% and ~17%, respectively, of the total emissions in China. In contrast, the average emission intensities in PRD were the highest, followed by the YRD and BSA regions. The establishment of this high-spatiotemporal-resolution ship emission inventory fills the gap of a national-scale ship emission inventory for China, and the corresponding ship emission characteristics are expected to provide a useful reference for the management and control of the ship
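
    The bottom-up estimation step can be sketched as follows: for each AIS-derived activity segment, engine energy is computed from a load factor and multiplied by an emission factor. The vessel parameters, load-factor law and NOx factor below are illustrative placeholders, not values from the inventory.

```python
# Minimal bottom-up sketch: emissions for one vessel over a sequence of AIS
# legs. Engine power, the propeller-law load factor and the emission factor
# are illustrative assumptions.

def leg_emissions(speed_kn, hours, design_speed_kn=22.0,
                  mcr_kw=30_000.0, ef_nox_g_per_kwh=14.0):
    """NOx emissions (t) for one AIS leg using the propeller-law load factor."""
    load = min((speed_kn / design_speed_kn) ** 3, 1.0)  # fraction of MCR
    energy_kwh = mcr_kw * load * hours
    return energy_kwh * ef_nox_g_per_kwh / 1e6          # g -> t

# Hypothetical AIS track: (speed over ground in knots, duration in hours)
track = [(18.5, 2.0), (12.0, 1.5), (0.5, 6.0)]  # cruising, manoeuvring, at berth
total_nox_t = sum(leg_emissions(v, h) for v, h in track)
print(f"NOx for this track: {total_nox_t:.3f} t")
```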

  17. Wavelet Based Characterization of Low Radio Frequency Solar Emissions

    Science.gov (United States)

    Suresh, A.; Sharma, R.; Das, S. B.; Oberoi, D.; Pankratius, V.; Lonsdale, C.

    2016-12-01

    Low-frequency solar radio observations with the Murchison Widefield Array (MWA) have revealed the presence of numerous short-lived, narrow-band weak radio features, even during quiet solar conditions. In their appearance in the frequency-time plane, they come closest to solar type III bursts, but with much narrower spectral spans and lower flux densities, so much so that they are not detectable with the usual swept frequency radio spectrographs. These features occur at rates of many thousand features per hour in the 30.72 MHz MWA bandwidth, and hence necessarily require an automated approach to determine robust statistical estimates of their properties, e.g., distributions of spectral widths, temporal spans, flux densities, slopes in the time-frequency plane and distribution over frequency. To achieve this, a wavelet decomposition approach has been developed for feature recognition and subsequent parameter extraction from the MWA dynamic spectrum. This work builds on earlier work by the members of this team to achieve a reliable flux calibration in a computationally efficient manner. Preliminary results show that the distribution of spectral span of these features peaks around 3 MHz, most of them last for less than two seconds and are characterized by flux densities of about 60% of the background solar emission. In analogy with the solar type III bursts, this non-thermal emission is envisaged to arise via coherent emission processes. There is also an exciting possibility that these features might correspond to radio signatures of nanoflares, hypothesized (Gold, 1964; Parker, 1972) to explain coronal heating.
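
    A minimal sketch of wavelet-based feature extraction from a dynamic spectrum is shown below, assuming a synthetic frequency-time array; the wavelet, threshold rule and injected feature are illustrative and do not reproduce the MWA pipeline.

```python
import numpy as np
import pywt
from scipy import ndimage

# Synthetic dynamic spectrum (frequency x time) with one weak injected feature
rng = np.random.default_rng(0)
dyn_spec = rng.normal(1.0, 0.05, (256, 512))          # quiet background
dyn_spec[100:110, 200:204] += 0.6                     # injected narrow-band feature

# Multi-level 2-D wavelet decomposition, then suppress small detail coefficients
coeffs = pywt.wavedec2(dyn_spec, "db2", level=3)
thr = 4 * np.median(np.abs(coeffs[-1][0]))            # crude noise-based threshold
denoised = pywt.waverec2(
    [coeffs[0]] + [tuple(pywt.threshold(c, thr, "hard") for c in lvl)
                   for lvl in coeffs[1:]],
    "db2")[:dyn_spec.shape[0], :dyn_spec.shape[1]]

# Label connected regions above the background to get candidate features
mask = denoised > np.median(denoised) + 0.3
labels, n_features = ndimage.label(mask)
for i in range(1, n_features + 1):
    fr, ti = np.nonzero(labels == i)
    print(f"feature {i}: span {np.ptp(fr)+1} channels x {np.ptp(ti)+1} time bins")
```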

  18. Chemometrical characterization of four italian rice varieties based on genetic and chemical analyses.

    Science.gov (United States)

    Brandolini, Vincenzo; Coïsson, Jean Daniel; Tedeschi, Paola; Barile, Daniela; Cereti, Elisabetta; Maietti, Annalisa; Vecchiati, Giorgio; Martelli, Aldo; Arlorio, Marco

    2006-12-27

    This paper describes a method for achieving qualitative identification of four rice varieties from two different Italian regions. To estimate the presence of genetic diversity among the four rice varieties, we used polymerase chain reaction-randomly amplified polymorphic DNA (PCR-RAPD) markers, and to elucidate whether a relationship exists between the growing area and the specific characteristics of the product, we studied proximate composition, fatty acid composition, mineral content, and total antioxidant capacity. Using principal component analysis on genomic and compositional data, we were able to classify rice samples according to their variety and their district of production. This work also examined the discrimination ability of different parameters. It was found that genomic data give the best discrimination based on varieties, indicating that RAPD assays could be useful in discriminating among closely related species, while compositional analyses do not depend only on the genetic characters but are also related to the production area.

  19. Stress and deflection analyses of floating roofs based on a load-modifying method

    International Nuclear Information System (INIS)

    Sun Xiushan; Liu Yinghua; Wang Jianbin; Cen Zhangzhi

    2008-01-01

    This paper proposes a load-modifying method for the stress and deflection analyses of floating roofs used in cylindrical oil storage tanks. The formulations of loads and deformations are derived according to the equilibrium analysis of floating roofs. Based on these formulations, the load-modifying method is developed to conduct a geometrically nonlinear analysis of floating roofs with finite element (FE) simulation. In the procedure with the load-modifying method, the analysis is carried out through a series of iterative computations until convergence is achieved within the error tolerance. Numerical examples are given to demonstrate the validity and reliability of the proposed method, which provides an effective and practical numerical solution to the design and analysis of floating roofs
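
    The iterative structure of such a load-modifying procedure can be sketched in a few lines; the placeholder solve_deflection() stands in for the FE solve, and all stiffness and load numbers are invented.

```python
# Conceptual sketch of a load-modifying iteration: the load on the floating
# roof is updated from the current deflection until successive solutions
# agree within a tolerance. solve_deflection() is a hypothetical stand-in for
# an FE solve; the stiffness, fluid weight and base load are illustrative.

def solve_deflection(load):
    """Toy 'FE solve': deflection proportional to load (placeholder)."""
    stiffness = 2.0e4  # N/m, illustrative
    return load / stiffness

def modified_load(base_load, deflection, fluid_weight=9.0e3):
    """Load updated for the liquid displaced by the deflected roof."""
    return base_load + fluid_weight * deflection

base_load, tol = 5.0e3, 1.0e-8
w = 0.0
for it in range(100):
    w_new = solve_deflection(modified_load(base_load, w))
    if abs(w_new - w) < tol:      # convergence within the error tolerance
        break
    w = w_new
print(f"converged after {it+1} iterations, deflection = {w_new:.6f} m")
```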

  20. Seismic fragility analyses of nuclear power plant structures based on the recorded earthquake data in Korea

    International Nuclear Information System (INIS)

    Cho, Sung Gook; Joe, Yang Hee

    2005-01-01

    By nature, the seismic fragility analysis results will be considerably affected by the statistical data of design information and site-dependent ground motions. The engineering characteristics of small magnitude earthquake spectra recorded in the Korean peninsula during the last several years are analyzed in this paper. An improved method of seismic fragility analysis is evaluated by comparative analyses to verify its efficiency for practical application to nuclear power plant structures. The effects of the recorded earthquake on the seismic fragilities of Korean nuclear power plant structures are also evaluated from the comparative studies. Observing the obtained results, the proposed method is more efficient for multi-mode structures. The case study results show that seismic fragility analysis based on Newmark's spectra in Korea might over-estimate the seismic capacities of Korean facilities
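
    For readers unfamiliar with fragility curves, the standard lognormal formulation they rest on can be sketched as follows; the median capacity and logarithmic standard deviation are illustrative values, not results from the study.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of a seismic fragility curve: probability of failure at a
# given peak ground acceleration for a lognormal capacity with median A_m and
# composite logarithmic standard deviation beta (both values illustrative).

A_m, beta = 0.55, 0.40   # median capacity (g) and log-std (assumed)

def fragility(pga_g):
    """P(failure | PGA) for a lognormal capacity model."""
    return norm.cdf(np.log(np.asarray(pga_g) / A_m) / beta)

for a in (0.1, 0.3, 0.55, 0.8):
    print(f"PGA = {a:.2f} g -> P(failure) = {fragility(a):.3f}")

# HCLPF capacity, a common fragility summary when a composite beta is used:
print(f"HCLPF ~ {A_m * np.exp(-2.33 * beta):.2f} g")
```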

  1. Seismic fragility analyses of nuclear power plant structures based on the recorded earthquake data in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Gook [Department of Civil and Environmental System Engineering, University of Incheon, 177 Dohwa-dong, Nam-gu, Incheon 402-749 (Korea, Republic of)]. E-mail: sgcho@incheon.ac.kr; Joe, Yang Hee [Department of Civil and Environmental System Engineering, University of Incheon, 177 Dohwa-dong, Nam-gu, Incheon 402-749 (Korea, Republic of)

    2005-08-01

    By nature, the seismic fragility analysis results will be considerably affected by the statistical data of design information and site-dependent ground motions. The engineering characteristics of small magnitude earthquake spectra recorded in the Korean peninsula during the last several years are analyzed in this paper. An improved method of seismic fragility analysis is evaluated by comparative analyses to verify its efficiency for practical application to nuclear power plant structures. The effects of the recorded earthquake on the seismic fragilities of Korean nuclear power plant structures are also evaluated from the comparative studies. Observing the obtained results, the proposed method is more efficient for multi-mode structures. The case study results show that seismic fragility analysis based on Newmark's spectra in Korea might over-estimate the seismic capacities of Korean facilities.

  2. OVERVIEW OF ADVANCED PETROLEUM-BASED FUELS-DIESEL EMISSIONS CONTROL PROGRAM (APBF-DEC)

    Energy Technology Data Exchange (ETDEWEB)

    Sverdrup, George M.

    2000-08-20

    The Advanced Petroleum-Based Fuels-Diesel Emissions Control Program (APBF-DEC) began in February 2000 and is supported by government agencies and industry. The purpose of the APBF-DEC program is to identify and evaluate the optimal combinations of fuels, lubricants, diesel engines, and emission control systems to meet the projected emission standards for the 2000 to 2010 time period. APBF-DEC is an outgrowth of the earlier Diesel Emission Control-Sulfur Effects Program (DECSE), whose objective was to determine the impact of fuel sulfur levels on emission control systems that could lower the emissions of NOx and particulate matter (PM) from diesel-powered vehicles in the 2002 to 2004 period. Results from the DECSE studies of two emission control technologies-diesel particle filter (DPF) and NOx adsorber-will be used in the APBF-DEC program. These data are expected to provide initial information on emission control technology options and the effects of fuel properties (including additives) on the performance of emission control systems.

  3. Fabrication and performance analysis of MEMS-based Variable Emissivity Radiator for Space Applications

    International Nuclear Information System (INIS)

    Lee, Changwook; Oh, Hyung-Ung; Kim, Taegyu

    2014-01-01

    The louver is typically the representative thermal control device. However, a louver is not well suited to small satellites because of its weight and volume. A MEMS-based variable emissivity radiator was therefore developed to overcome these disadvantages and was designed for satellite thermal control because of its immediate response and low power consumption. Fabricated using a MEMS process, the radiator is smaller, avoiding the increase in weight and volume, and offers high reliability and immediate response through electrical control. In this study, operation of the MEMS radiator was validated, showing that its emissivity could be controlled. A numerical model was also developed to predict the thermal control performance of the MEMS-based variable emissivity radiator

  4. Secondary Data Analyses of Subjective Outcome Evaluation Data Based on Nine Databases

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2012-01-01

    The purpose of this study was to evaluate the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong by analyzing 1,327 school-based program reports submitted by program implementers. In each report, program implementers were invited to write down five conclusions based on an integration of the subjective outcome evaluation data collected from the program participants and program implementers. Secondary data analyses were carried out by aggregating nine databases, with 14,390 meaningful units extracted from 6,618 conclusions. Results showed that most of the conclusions were positive in nature. The findings generally showed that the workers perceived the program and program implementers to be positive, and they also pointed out that the program could promote holistic development of the program participants in societal, familial, interpersonal, and personal aspects. However, difficulties encountered during program implementation (2.15%) and recommendations for improvement were also reported (16.26%). In conjunction with the evaluation findings based on other strategies, the present study suggests that the Tier 1 Program of the Project P.A.T.H.S. is beneficial to the holistic development of the program participants.

  5. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Science.gov (United States)

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals and the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788

  6. A consumption-based, regional input-output analysis of greenhouse gas emissions and the carbon regional index

    DEFF Research Database (Denmark)

    Boyd, Britta; Mangalagiu, Diana; Straatman, Bas

    2018-01-01

    This paper presents a consumption-based method accounting for greenhouse gas emissions at regional level based on a multi-region input-output model. The method is based on regional consumption and includes imports and exports of emissions, factual emission developments, green investments as well...
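
    The consumption-based accounting step of a (multi-)regional input-output model can be sketched with the familiar Leontief relation; the two-sector coefficients, final demand and emission intensities below are invented for illustration.

```python
import numpy as np

# Minimal sketch of consumption-based emission accounting with an
# input-output model: emissions = f_hat (I - A)^(-1) y, where A is the
# technical coefficient matrix, y final demand and f direct emission
# intensities. The two-sector numbers are invented for illustration only.

A = np.array([[0.15, 0.25],          # inter-industry requirements (output/output)
              [0.20, 0.10]])
y = np.array([120.0, 80.0])          # final demand by sector (monetary units)
f = np.array([0.9, 0.3])             # direct emissions per unit output (kt CO2-e)

x = np.linalg.solve(np.eye(2) - A, y)   # total output needed: (I - A)^-1 y
emissions_by_sector = f * x             # emissions embodied in that output
print("total output:", np.round(x, 1))
print("consumption-based emissions:", np.round(emissions_by_sector, 1),
      "-> total", round(emissions_by_sector.sum(), 1), "kt CO2-e")
```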

  7. Estimating emissions on vehicular traffic based on projected energy and transport demand on rural roads: Policies for reducing air pollutant emissions and energy consumption

    International Nuclear Information System (INIS)

    Ozan, Cenk; Haldenbilen, Soner; Ceylan, Halim

    2011-01-01

    This study deals with the estimation of emissions caused by vehicular traffic based on transport demand and energy consumption. Projected transport demand is calculated with a Genetic Algorithm (GA) using population, gross domestic product per capita (GDPPC) and the number of vehicles. The energy consumption is modelled with the GA using vehicle-kilometres (veh-km). The model age of the vehicles and their corresponding shares for each year are obtained using the reference years. The pollutant emissions are calculated from the estimated transport and energy demand. All calculations are made in line with the European standards. For this purpose, two cases are composed. Case 1: Emissions based on energy consumption, and Case 2: Emissions based on transport demand. Both cases are compared. Three policies are proposed to control demand and the emissions. Policy I provided the best results in terms of minimum emissions and a reasonable modal share, with 70% highway and 30% railway usage. The emission calculation procedure presented in this study would provide an alternative way to make policies when there are no adequate data on emission measurement in developing countries. - Research highlights: → Emissions caused by vehicular traffic are modelled. → The pollutant emissions are calculated from the estimated transport and energy demand. → All calculations are made in line with the European standards. → The calculation procedure will provide an alternative way to make policies. → The procedure will help planners to convince politicians to impose policies.
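
    The final step, converting projected demand into emissions, amounts to multiplying vehicle-kilometres per vehicle/age class by class-specific emission factors; the classes and factors below are illustrative placeholders.

```python
# Minimal sketch of converting projected transport demand into pollutant
# emissions: veh-km by vehicle/age class multiplied by emission factors that
# reflect the Euro standard of each class. All numbers are illustrative.

veh_km_by_class = {            # projected annual vehicle-km (millions)
    "pre_euro": 150.0,
    "euro3":    900.0,
    "euro5":    1400.0,
}
nox_ef_g_per_km = {            # fleet-average NOx emission factors (assumed)
    "pre_euro": 1.8,
    "euro3":    0.9,
    "euro5":    0.35,
}

nox_tonnes = sum(veh_km_by_class[c] * 1e6 * nox_ef_g_per_km[c]
                 for c in veh_km_by_class) / 1e6
print(f"Projected NOx emissions: {nox_tonnes:.0f} t/year")
```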

  8. Analysis of iron-base alloys by low-wattage glow discharge emission spectrometry

    International Nuclear Information System (INIS)

    Wagatsuma, K.; Hirokawa, K.

    1984-01-01

    Several iron-base alloys were investigated by low-wattage glow discharge emission spectrometry. The emission intensity principally depended on the sputtering parameters of constituent elements in the alloy. However, in the case of chromium, stable and firm oxides formed on the surface, influencing the yield of ejected atoms. This paper discusses the relation between the sputtering parameters in Fe-Ni, Fe-Cr, and Fe-Co alloys and their relative emission intensities. Additionally, quantitative analysis was performed for some ternary iron-base alloys and commercial stainless steels with the calibration factors of binary alloy systems

  9. Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty

    Science.gov (United States)

    Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.

    2013-12-01

    Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraints is the analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and effects of a variety of socioeconomic events at different spatiotemporal scales, including general economic growth, the global financial crisis, Chinese New Year, and the Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both the satellite NO2 retrieval and the model simulation that ultimately affect the NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using ground-based MAX-DOAS measurements as a reference to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes, with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.
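
    As a first-order illustration of how satellite columns constrain emissions, a simple mass-balance style scaling is sketched below; the arrays and the sensitivity factor are invented, and the actual studies use full retrievals and chemical transport model simulations.

```python
import numpy as np

# First-order mass-balance update often used as a starting point for
# constraining emissions from satellite columns: scale prior emissions by the
# ratio of observed to modelled tropospheric NO2 columns, cell by cell.
# The 2x2 arrays and beta are invented for illustration.

E_prior = np.array([[4.0, 1.5],
                    [0.8, 6.0]])        # prior NOx emissions (arbitrary units)
vcd_obs = np.array([[5.2, 1.4],
                    [1.0, 5.1]])        # retrieved NO2 columns
vcd_mod = np.array([[4.1, 1.6],
                    [0.7, 6.3]])        # columns simulated with E_prior

beta = 1.0                               # local sensitivity d(lnE)/d(lnVCD), assumed
E_post = E_prior * (vcd_obs / vcd_mod) ** beta
print(np.round(E_post, 2))
```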

  10. Internalizing carbon costs in electricity markets: Using certificates in a load-based emissions trading scheme

    International Nuclear Information System (INIS)

    Gillenwater, Michael; Breidenich, Clare

    2009-01-01

    Several western states have considered developing a regulatory approach to reduce greenhouse gas (GHG) emissions from the electric power industry, referred to as a load-based (LB) cap-and-trade scheme. An LB approach differs from the traditional source-based (SB) cap-and-trade approach in that the emission reduction obligation is placed upon Load Serving Entities (LSEs), rather than electric generators. The LB approach can potentially reduce the problem of emissions leakage, relative to an SB system. For any of these proposed LB schemes to be effective, they must be compatible with modern, and increasingly competitive, wholesale electricity markets. LSEs are unlikely to know the emissions associated with their power purchases. Therefore, a key challenge for an LB scheme is how to assign emissions to each LSE. This paper discusses the problems with one model for assigning emissions under an LB scheme and proposes an alternative, using unbundled Generation Emission Attribute Certificates. By providing a mechanism to internalize an emissions price signal at the generator dispatch level, the tradable certificate model addresses both these problems and provides incentives identical to an SB scheme

  11. [Research on the method of copper converting process determination based on emission spectrum analysis].

    Science.gov (United States)

    Li, Xian-xin; Liu, Wen-qing; Zhang, Yu-jun; Si, Fu-qi; Dou, Ke; Wang, Feng-ping; Huang, Shu-hua; Fang, Wu; Wang, Wei-qiang; Huang, Yong-feng

    2012-05-01

    A method of copper converting process determination based on PbO/PbS emission spectrum analysis is described. According to the known emission spectra of gas molecules, the existence of PbO and PbS was confirmed in the measured spectrum. Through the field experiment it was determined that the main emission spectrum of the slag stage was from PbS, and the main emission spectrum of the copper stage was from PbO. The relative changes in the PbO/PbS emission spectra provide the basis for determining the copper converting process. Using the relative intensity of the PbO/PbS emission spectra, the copper smelting process can be divided into two different stages, i.e., the slag stage (S phase) and the copper stage (B phase). In a complete copper smelting cycle, with a receiving telescope of appropriate view angle aimed at the converter flame and after noise filtering of the PbO/PbS emission spectrum, the process determination agrees with the actual production. Both theory and experiment prove that the method of copper converting process determination based on emission spectrum analysis is feasible.

  12. Evaluation of methane emissions from West Siberian wetlands based on inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H-S; Inoue, G [Research Institute for Humanity and Nature, 457-4 Motoyama, Kamigamo, Kita-ku, Kyoto 603-8047 (Japan); Maksyutov, S; Machida, T [National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506 (Japan); Glagolev, M V [Lomonosov Moscow State University, GSP-1, Leninskie Gory, Moscow 119991 (Russian Federation); Patra, P K [Research Institute for Global Change/JAMSTEC, 3173-25 Showa-cho, Kanazawa-ku, Yokohama, Kanagawa 236-0001 (Japan); Sudo, K, E-mail: heonsook.kim@gmail.com [Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan)

    2011-07-15

    West Siberia contains the largest extent of wetlands in the world, including large peat deposits; the wetland area is equivalent to 27% of the total area of West Siberia. This study used inverse modeling to refine emissions estimates for West Siberia using atmospheric CH{sub 4} observations and two wetland CH{sub 4} emissions inventories: (1) the global wetland emissions dataset of the NASA Goddard Institute for Space Studies (the GISS inventory), which includes emission seasons and emission rates based on climatology of monthly surface air temperature and precipitation, and (2) the West Siberian wetland emissions data (the Bc7 inventory), based on in situ flux measurements and a detailed wetland classification. The two inversions using the GISS and Bc7 inventories estimated annual mean flux from West Siberian wetlands to be 2.9 {+-} 1.7 and 3.0 {+-} 1.4 Tg yr{sup -1}, respectively, which are lower than the 6.3 Tg yr{sup -1} predicted in the GISS inventory, but similar to those of the Bc7 inventory (3.2 Tg yr{sup -1}). The well-constrained monthly fluxes and a comparison between the predicted CH{sub 4} concentrations in the two inversions suggest that the Bc7 inventory predicts the seasonal cycle of West Siberian wetland CH{sub 4} emissions more reasonably, indicating that the GISS inventory predicts more emissions from wetlands in northern and middle taiga.

  13. Upgradation of an Apple IIe based DC arc atomic emission spectrometer to a PC based system

    International Nuclear Information System (INIS)

    Sampathkumar, R.; Ravindranath, S.V.G.; Patil, P.B.; Deshpande, S.S.; Saha, T.K.; Handu, V.K.

    2004-01-01

    The analysis of uranium metal and its compounds used as reactor fuel for the presence of impurities, especially Cd and B, which have a high neutron capture cross section, is routinely performed in the Spectroscopy Division. The DC Arc Atomic Emission Spectrometer in the Division employed an Apple IIe computer for control and data acquisition. The system was upgraded to a PC-based data acquisition system and the necessary software to perform the spectrochemical analysis has been developed. This became necessary because commercially available Atomic Emission Spectrometers are no longer equipped with a DC arc source, and the Apple IIe computer performing the control and data acquisition has become obsolete and its spares are no longer available. Therefore, to retain the benefits of using a DC arc as the excitation source, the system was upgraded to a PC-based system. This paper describes the upgraded system and the various software features relating to the mode of data acquisition, method of analysis, data processing, etc., implemented as required by the analysts. (author)

  14. Updated analysis of Denmark's possibilities of reducing NO{sub X} emissions; En opdateret analyse af Danmarks muligheder for at reducere emissionerne af NOx

    Energy Technology Data Exchange (ETDEWEB)

    COWI A/S, Kgs. Lyngby (Denmark)

    2009-07-01

    The update of the measures included in the 2006 analysis has given the following key results: 1) A number of measures such as boosting and reburning on power stations and other large point sources are no longer considered relevant measures. 2) Minor revisions and adjustments have been implemented for measures in industry, the district heating sector, mobile sources and offshore. 3) Additional measures have been considered. This includes primarily the use of SNCR (Selective Non-Catalytic Reduction) and SCR (Selective Catalytic Reduction). Most sources of NO{sub x} emissions can be fitted with either of these abatement technologies. There is, for example, a potential in more frequent replacement of the catalytic elements in the SCR units, and the reduction achieved in SNCR units can be increased by higher ammonia dosing. These are relevant measures in waste incineration installations. The report includes rough estimates of reduction potentials and costs. The calculations show the costs and benefits of the relevant measures. The measures are ranked according to their shadow price, with the damage cost of emitting one kg of NO{sub x} being DKK 52. Measures with a shadow price lower than the damage costs would give a welfare-economic surplus. This implies that the most cost-effective measures are 1) Better controls for gas engines at combined heat and power plants (CHP) 2) Optimisation of SNCR in waste incineration installations 3) Replacement with low-NO{sub x} burners on light-oil-fired boilers in industry and CHP. The measures in CHP and industry remove 3300 tonnes NO{sub x} in 2010. The measures imply a cost of DKK 3 million per year for the business sector and DKK 12 million per year for the government due to a loss in tax revenues. Moreover, reductions can be expected from the measures within the waste incineration installations, but the exact potential has not been estimated here. A number of sensitivity analyses have been carried out

  15. Reduction of CO{sub 2} emission and oil dependency with biomass-based polygeneration

    Energy Technology Data Exchange (ETDEWEB)

    Joelsson, Jonas M; Gustavsson, Leif [Ecotechnology and Environmental Science, Department of Engineering and Sustainable Development, Mid Sweden University, SE-831 25 Oestersund (Sweden)

    2010-07-15

    We compare different options for the use of lignocellulosic biomass to reduce CO{sub 2} emission and oil use, focusing on polygeneration of biomass-based motor fuels and electricity, and discuss methodological issues related to such comparisons. The use of biomass can significantly reduce CO{sub 2} emission and oil use, but there is a trade-off between the reductions in CO{sub 2} emission and oil use. Bioelectricity from stand-alone plants replacing coal-based electricity reduced CO{sub 2} emission by 99 kg per GJ biomass input but gave no oil use reduction. Stand-alone produced methanol replacing diesel reduced the CO{sub 2} emission by 38 kg and the oil use by 0.67 GJ per GJ biomass, indicating that a potential CO{sub 2} emission reduction of 90 kg is lost per GJ oil reduced. CO{sub 2} emission and oil use reduction for alternatives co-producing fuel and electricity fall between the stand-alone alternatives. Plug-in hybrid-electric vehicles using bioelectricity reduced CO{sub 2} emission by 75-88 kg and oil use by 0.99-1.2 GJ, per GJ biomass input. Biomass can also reduce CO{sub 2} emission and/or oil use more efficiently if fossil-fuel-fired boilers or electric heating is replaced by district heating from biomass-based combined heat and power generation. This is also true if electricity or motor fuel is produced from black liquor gasification in pulp mills or if wood is used instead of concrete in building construction. Biomass gasification is an important technology to achieve large reductions, irrespective of whether CO{sub 2} emission or oil use reduction is prioritised. (author)

  16. Validation of a fully autonomous phosphate analyser based on a microfluidic lab-on-a-chip

    DEFF Research Database (Denmark)

    Slater, Conor; Cleary, J.; Lau, K.T.

    2010-01-01

    of long-term operation. This was proven by a bench top calibration of the analyser using standard solutions and also by comparing the analyser's performance to a commercially available phosphate monitor installed at a waste water treatment plant. The output of the microfluidic lab-on-a-chip analyser...

  17. A High Resolution Technology-based Emissions Inventory for Nepal: Present and Future Scenario

    Science.gov (United States)

    Sadavarte, P.; Das, B.; Rupakheti, M.; Byanju, R.; Bhave, P.

    2016-12-01

    The lack of a comprehensive regional assessment of emission sources is a major hindrance to a complete understanding of air quality and to designing appropriate mitigation solutions in Nepal, a landlocked country in the foothills of the Himalaya. This study attempts, for the first time, to develop a fine-resolution (1 km × 1 km) present-day emission inventory of Nepal with a higher-tier approach, using our understanding of the currently used technologies, the energy consumption in various energy sectors and the resultant emissions. We estimate present-day emissions of aerosols (BC, OC and PM2.5), trace gases (SO2, CO, NOX and VOC) and greenhouse gases (CO2, N2O and CH4) from non-open burning sources (residential, industry, transport, commercial) and open-burning sources (agriculture and municipal solid waste burning) for the base year 2013. We used methodologies published in the literature, and both primary and secondary data, to estimate energy production and consumption in each sector and sub-sector and the associated emissions. Local practices and activity rates are explicitly accounted for in the energy consumption, and dispersed, often under-documented emission sources such as brick manufacturing, diesel generator sets, mining, stone crushing, solid waste burning and diesel use on farms are considered. Apart from pyrogenic sources of CH4 emissions, methanogenic and enteric fermentation sources are also accounted for. Region-specific and newly measured country-specific emission factors are used for the emission estimates. Activity-based proxies are used for the spatial and temporal distribution of emissions. Preliminary results suggest that 80% of national energy consumption is in the residential sector, followed by industry (8%) and transport (7%). More than 90% of the residential energy is supplied by biofuels, which needs immediate attention to reduce emissions. Further, the emissions would be compared with other contemporary studies, regional and global datasets and used in the model simulations to
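
    The core inventory step, activity data times emission factor followed by proxy-based gridding, can be sketched as follows; the sub-sectors, factors and 3×3 proxy grid are illustrative placeholders, not values from the Nepal inventory.

```python
import numpy as np

# Minimal sketch of a technology-based inventory step: emissions = activity
# x emission factor for each sub-sector, then spatial allocation over a grid
# using a proxy (e.g. population density). All numbers are illustrative.

activity_tj = {"residential_biomass": 85_000.0, "brick_kilns": 6_500.0}
pm25_ef_kg_per_tj = {"residential_biomass": 580.0, "brick_kilns": 95.0}

national_pm25_t = sum(activity_tj[s] * pm25_ef_kg_per_tj[s]
                      for s in activity_tj) / 1000.0

proxy = np.array([[1.0, 4.0, 2.0],      # proxy weights (e.g. population counts)
                  [0.5, 9.0, 3.0],
                  [0.2, 1.5, 0.8]])
gridded_pm25 = national_pm25_t * proxy / proxy.sum()   # t per grid cell
print(f"national PM2.5: {national_pm25_t:.0f} t")
print(np.round(gridded_pm25, 1))
```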

  18. Aging of plumes from emission sources based on chamber simulation

    Science.gov (United States)

    Wang, X.; Deng, W.; Fang, Z.; Bernard, F.; Zhang, Y.; Yu, J.; Mellouki, A.; George, C.

    2017-12-01

    Studying the atmospheric aging of plumes from emission sources is essential to understand their contribution to both secondary and primary pollutants occurring in the ambient air. Here we directly introduced vehicle exhaust, biomass burning plumes, industrial solvents and cooking plumes into a smog chamber with a 30 m3 fluorinated ethylene propylene (FEP) Teflon film reactor housed in a temperature-controlled enclosure, for characterizing primarily emitted air pollutants and for investigating secondarily formed products during photo-oxidation. Moreover, we also initiated a study on the formation of secondary aerosols when gasoline vehicle exhaust is mixed with the typical coal combustion pollutant SO2 or the typical agriculture-related pollutant NH3. The formation of secondary organic aerosols (SOA) from the typical solvent toluene was also investigated in an ambient air matrix in comparison with a purified air matrix. Main findings include: 1) Except for exhaust from idling gasoline vehicles, traditional precursor volatile organic compounds could only explain a very small fraction of the SOA formed from vehicle exhaust, biomass burning or cooking plumes, suggesting a knowledge gap in SOA precursors; 2) There is a need to rethink vehicle emission standards in view of the combined primary and/or secondary contribution of vehicle exhaust to PM2.5 and other secondary pollutants such as ozone; 3) When mixed with SO2, the gasoline vehicle exhaust showed an increase of the SOA production factor by 60-200%, while SO2 oxidation rates increased by about a factor of 2.7; when the aged gasoline vehicle exhaust was mixed with NH3, both particle number and mass concentrations increased explosively. These phenomena imply complex interactions during the aging of co-existing source emissions. 4) For the typical combination of "toluene+SO2+NOx", when compared to chamber simulations with purified air as the matrix, both SOA formation and SO2 oxidation were greatly enhanced under the ambient air matrix, and the enhancement

  19. Genome based analyses of six hexacorallian species reject the “naked coral” hypothesis

    KAUST Repository

    Wang, Xin

    2017-09-23

    Scleractinian corals are the foundation species of the coral-reef ecosystem. Their calcium carbonate skeletons form extensive structures that are home to millions of species, making coral reefs one of the most diverse ecosystems of our planet. However, our understanding of how reef-building corals have evolved the ability to calcify and become the ecosystem builders they are today is hampered by uncertain relationships within their subclass Hexacorallia. Corallimorpharians have been proposed to originate from a complex scleractinian ancestor that lost the ability to calcify in response to increasing ocean acidification, suggesting the possibility for corals to lose and regain the ability to calcify. Here we employed a phylogenomic approach using whole-genome data from six hexacorallian species to resolve the evolutionary relationship between reef-building corals and their non-calcifying relatives. Phylogenetic analysis based on 1,421 single-copy orthologs, as well as gene presence/absence and synteny information, converged on the same topologies, showing strong support for scleractinian monophyly and a corallimorpharian sister clade. Our broad phylogenomic approach using sequence-based and sequence-independent analyses provides unambiguous evidence for the monophyly of scleractinian corals and the rejection of corallimorpharians as descendants of a complex coral ancestor.

  20. Genome based analyses of six hexacorallian species reject the “naked coral” hypothesis

    KAUST Repository

    Wang, Xin; Drillon, Guénola; Ryu, Taewoo; Voolstra, Christian R.; Aranda, Manuel

    2017-01-01

    Scleractinian corals are the foundation species of the coral-reef ecosystem. Their calcium carbonate skeletons form extensive structures that are home to millions of species, making coral reefs one of the most diverse ecosystems of our planet. However, our understanding of how reef-building corals have evolved the ability to calcify and become the ecosystem builders they are today is hampered by uncertain relationships within their subclass Hexacorallia. Corallimorpharians have been proposed to originate from a complex scleractinian ancestor that lost the ability to calcify in response to increasing ocean acidification, suggesting the possibility for corals to lose and regain the ability to calcify. Here we employed a phylogenomic approach using whole-genome data from six hexacorallian species to resolve the evolutionary relationship between reef-building corals and their non-calcifying relatives. Phylogenetic analysis based on 1,421 single-copy orthologs, as well as gene presence/absence and synteny information, converged on the same topologies, showing strong support for scleractinian monophyly and a corallimorpharian sister clade. Our broad phylogenomic approach using sequence-based and sequence-independent analyses provides unambiguous evidence for the monophyly of scleractinian corals and the rejection of corallimorpharians as descendants of a complex coral ancestor.

  1. Quantitative Prediction of Coalbed Gas Content Based on Seismic Multiple-Attribute Analyses

    Directory of Open Access Journals (Sweden)

    Renfang Pan

    2015-09-01

    Accurate prediction of gas planar distribution is crucial to the selection and development of new CBM exploration areas. Based on seismic attributes, well logging and testing data, we found that seismic absorption attenuation, after eliminating the effects of burial depth, shows an evident correlation with CBM gas content; positive structure curvature has a negative correlation with gas content; and density has a negative correlation with gas content. It is feasible to use the hydrocarbon index (P*G) and pseudo-Poisson ratio attributes for detection of gas enrichment zones. Based on seismic multiple-attribute analyses, a multiple linear regression equation was established between the seismic attributes and gas content at the drilling wells. Application of this equation to the seismic attributes at locations other than the drilling wells yielded a quantitative prediction of planar gas distribution. Prediction calculations were performed for two different models, one using pre-stack inversion and the other disregarding pre-stack inversion. A comparison of the results indicates that both models predicted a similar trend for gas content distribution, except that the model using pre-stack inversion yielded considerably higher precision than the other model.
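
    The multiple-attribute step can be sketched as an ordinary least-squares fit at the wells followed by prediction on the seismic grid; the attribute values and gas contents below are synthetic.

```python
import numpy as np

# Sketch of the multiple-attribute step: fit a linear regression between
# seismic attributes extracted at well locations and measured gas content,
# then apply it away from the wells. All data below are synthetic.

rng = np.random.default_rng(1)
n_wells = 12
attrs = rng.normal(size=(n_wells, 3))              # attenuation, curvature, density
gas = 8.0 + attrs @ np.array([1.5, -0.8, -1.1]) + rng.normal(0, 0.3, n_wells)

X = np.column_stack([np.ones(n_wells), attrs])     # add intercept column
coef, *_ = np.linalg.lstsq(X, gas, rcond=None)     # least-squares fit
print("regression coefficients:", np.round(coef, 2))

# Predict gas content for attributes sampled on the seismic grid (synthetic)
grid_attrs = rng.normal(size=(5, 3))
pred = np.column_stack([np.ones(5), grid_attrs]) @ coef
print("predicted gas content (illustrative units):", np.round(pred, 2))
```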

  2. Risk-based analyses in support of California hazardous site remediation

    International Nuclear Information System (INIS)

    Ringland, J.T.

    1995-08-01

    The California Environmental Enterprise (CEE) is a joint program of the Department of Energy (DOE), Lawrence Livermore National Laboratory, Lawrence Berkeley Laboratory, and Sandia National Laboratories. Its goal is to make DOE laboratory expertise accessible to hazardous site cleanups in the state. This support might involve working directly with parties responsible for individual cleanups, or it might involve working with the California Environmental Protection Agency to develop tools that would be applicable across a broad range of sites. As part of its initial year's activities, the CEE supported a review to examine where laboratory risk and risk-based systems analysis capabilities might be most effectively applied. To this end, this study draws the following observations. The labs have a clear role in analyses supporting the demonstration and transfer of laboratory characterization or remediation technologies. The labs may have opportunities in developing broadly applicable analysis tools and computer codes for problems such as site characterization or efficient management of resources. Analysis at individual sites, separate from supporting lab technologies or prototyping general tools, may be appropriate only in limited circumstances. In any of these roles, the labs' capabilities extend beyond health risk assessment to the broader areas of risk management and risk-based systems analysis

  3. Analyses of microstructural and elastic properties of porous SOFC cathodes based on focused ion beam tomography

    Science.gov (United States)

    Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan

    2015-01-01

    Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of the digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges result in different degrees of sensitivity for a specific parameter. The estimated porosity and tortuosity were more sensitive than the surface area to volume ratio. Pore and neck sizes were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
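
    The sensitivity analysis can be illustrated by segmenting a synthetic grayscale volume at several thresholds and tracking the estimated porosity; the volume below is a random stand-in for the FIB/SEM image stack.

```python
import numpy as np

# Minimal sketch of threshold sensitivity: segment a synthetic grayscale
# volume at several threshold values and observe how the estimated porosity
# changes. The synthetic volume is an illustrative stand-in for real data.

rng = np.random.default_rng(3)
volume = rng.normal(loc=120, scale=30, size=(64, 64, 64))   # grayscale values

for thr in (100, 110, 120, 130, 140):
    pore_mask = volume < thr               # voxels darker than threshold = pore
    porosity = pore_mask.mean()
    print(f"threshold {thr}: porosity = {porosity:.3f}")
```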

  4. Influence of metallic based fuel additives on performance and exhaust emissions of diesel engine

    Energy Technology Data Exchange (ETDEWEB)

    Keskin, Ali [Tarsus Technical Education Faculty, Mersin University, 33500 Mersin (Turkey); Guerue, Metin, E-mail: mguru@gazi.edu.t [Engineering and Architectural Faculty, Gazi University, 06570 Maltepe, Ankara (Turkey); Altiparmak, Duran [Technical Education Faculty, Gazi University, 06500 Ankara (Turkey)

    2011-01-15

    In this experimental study, the influence of metallic-based additives on the fuel consumption and exhaust emissions of a diesel engine was investigated. The metallic-based additives were produced by synthesizing resin acid (abietic acid) with MnO{sub 2} or MgO. These additives were doped into diesel fuel at rates of 8 {mu}mol/l and 16 {mu}mol/l to prepare test fuels. Both additives improved properties of the diesel fuel such as viscosity, flash point, cloud point and pour point. The fuels with and without additives were tested in a direct injection diesel engine at full load condition. The maximum reduction of specific fuel consumption was recorded as 4.16%. CO emission and smoke opacity decreased by 16.35% and 29.82%, respectively. NO{sub x} emission was higher and CO{sub 2} emission did not change considerably with the metallic-based additives.

  5. COLD START CHARACTERISTICS STUDY BASED ON REAL TIME NO EMISSIONS IN AN LPG SI ENGINE

    Directory of Open Access Journals (Sweden)

    Yingli Zu

    2010-01-01

    Normally, cylinder pressure is used as a criterion of combustion occurrence, but in some conditions it may be unreliable for identifying lean mixture combustion. This is particularly important for fuels like liquefied petroleum gas, which has good capacity for lean combustion. In this study, a fast response NO detector, based on the chemiluminescence method, was used to measure real time NO emissions in order to evaluate the technique as a criterion for establishing combustion occurrence. Test results show that real time NO emissions can be used to identify cylinder combustion and misfire occurrence during engine cranking and to understand the combustion process. Real time NO emissions mostly occurred in the first several cycles during cold start, and NO emissions increased as the spark timing was advanced.

  6. Quantifying greenhouse gas emissions from coal fires using airborne and ground-based methods

    Science.gov (United States)

    Engle, Mark A.; Radke, Lawrence F.; Heffern, Edward L.; O'Keefe, Jennifer M.K.; Smeltzer, Charles; Hower, James C.; Hower, Judith M.; Prakash, Anupma; Kolker, Allan; Eatwell, Robert J.; ter Schure, Arnout; Queen, Gerald; Aggen, Kerry L.; Stracher, Glenn B.; Henke, Kevin R.; Olea, Ricardo A.; Román-Colón, Yomayara

    2011-01-01

    Coal fires occur in all coal-bearing regions of the world and number, conservatively, in the thousands. These fires emit a variety of compounds including greenhouse gases. However, the magnitude of the contribution of combustion gases from coal fires to the environment is highly uncertain, because adequate data and methods for assessing emissions are lacking. This study demonstrates the ability to estimate CO2 and CH4 emissions for the Welch Ranch coal fire, Powder River Basin, Wyoming, USA, using two independent methods: (a) heat flux calculated from aerial thermal infrared imaging (3.7–4.4 t d−1 of CO2 equivalent emissions) and (b) direct, ground-based measurements (7.3–9.5 t d−1 of CO2 equivalent emissions). Both approaches offer the potential for conducting inventories of coal fires to assess their gas emissions and to evaluate and prioritize fires for mitigation.

  7. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.

  8. Comprehensive logic based analyses of Toll-like receptor 4 signal transduction pathway.

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar Padwal

    Among the 13 TLRs in vertebrate systems, only TLR4 utilizes both Myeloid differentiation factor 88 (MyD88) and Toll/Interleukin-1 receptor (TIR)-domain-containing adapter interferon-β-inducing Factor (TRIF) adaptors to transduce signals triggering host-protective immune responses. Earlier studies on the pathway combined various experimental data in the form of one comprehensive map of TLR signaling, but in the absence of adequate kinetic parameters, quantitative mathematical models that reveal emergent systems-level properties and the dynamic inter-regulation among the kinases/phosphatases of the TLR4 network are not yet available. Here we used a reaction stoichiometry-based and parameter-independent logical modeling formalism to build a TLR4 signaling network model that captured the feedback regulations, the interdependencies between signaling kinases and phosphatases, and the outcomes of simulated infections. The analyses of the TLR4 signaling network revealed 360 feedback loops, 157 negative and 203 positive, of which 334 loops had the phosphatase PP1 as an essential component. The analysis of the network elements' interdependency (positive or negative dependencies) in perturbation conditions, such as phosphatase knockouts, revealed interdependencies between the dual-specific phosphatases MKP-1 and MKP-3 and the kinases in MAPK modules, and the role of PP2A in the auto-regulation of Calmodulin kinase-II. Our simulations under specific kinase or phosphatase gene-deficiency or inhibition conditions corroborated several previously reported experimental data. The simulations mimicking Yersinia pestis and E. coli infections identified the key perturbations in the network and potential drug targets. Thus, our analyses of TLR4 signaling highlight the role of phosphatases as key regulatory factors in determining the global interdependencies among the network elements; uncover novel signaling connections; and identify potential drug targets for
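
    The flavour of parameter-independent logical modelling can be conveyed with a toy Boolean network; the rules below are a hypothetical illustration of a kinase/phosphatase negative feedback, not the published TLR4 model.

```python
# Toy sketch of parameter-independent logical (Boolean) modelling: each node
# is ON/OFF and is updated synchronously from logical rules. The rule set is
# a hypothetical illustration of negative feedback through a phosphatase.

rules = {
    "TLR4": lambda s: s["LPS"],                     # receptor follows the stimulus
    "IKK":  lambda s: s["TLR4"] and not s["PP1"],   # kinase inhibited by phosphatase
    "NFkB": lambda s: s["IKK"],
    "PP1":  lambda s: s["NFkB"],                    # phosphatase induced downstream
}

state = {"LPS": True, "TLR4": False, "IKK": False, "NFkB": False, "PP1": False}

for step in range(8):                               # synchronous updates
    new_state = dict(state)
    new_state.update({node: rule(state) for node, rule in rules.items()})
    if new_state == state:                          # fixed point reached
        break
    state = new_state
    print(step, {k: int(v) for k, v in state.items()})
# The negative feedback through PP1 makes this toy network oscillate rather
# than settle, mirroring the dominant role of feedback loops noted above.
```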

  9. Voxel-based morphometry analyses of in-vivo MRI in the aging mouse lemur primate

    Directory of Open Access Journals (Sweden)

    Stephen John Sawiak

    2014-05-01

    Cerebral atrophy is one of the brain alterations most widely associated with aging. A clear relationship has been established between age-associated cognitive impairments and cerebral atrophy. The mouse lemur (Microcebus murinus) is a small primate used as a model of age-related neurodegenerative processes. It is the first nonhuman primate in which cerebral atrophy has been correlated with cognitive deficits. Previous studies of cerebral atrophy in this model were based on time-consuming manual delineation or measurement of selected brain regions from magnetic resonance images (MRI). These measures could not be used to analyse regions that cannot be easily outlined, such as the nucleus basalis of Meynert or the subiculum. In humans, morphometric assessment of structural changes with age is generally performed with automated procedures such as voxel-based morphometry (VBM). The objective of our work was to perform user-independent assessment of age-related morphological changes in the whole brain of large mouse lemur populations by means of VBM. The study was based on the SPMMouse toolbox of SPM 8 and involved thirty mouse lemurs aged from 1.9 to 11.3 years. The automatic method revealed for the first time atrophy in regions where manual delineation is prohibitive (nucleus basalis of Meynert, subiculum, prepiriform cortex, Brodmann areas 13-16, hypothalamus, putamen, thalamus, corpus callosum). Some of these regions are described as particularly sensitive to age-associated alterations in humans. The method also revealed age-associated atrophy in cortical regions (cingulate, occipital, parietal), the nucleus septalis, and the caudate. Manual measures performed in some of these regions were in good agreement with results from the automatic measures. The templates generated in this study as well as the toolbox for SPM8 can be downloaded. These tools will be valuable for future evaluation of various treatments that are tested to modulate cerebral aging in lemurs.

  10. Correction of Measured Taxicab Exhaust Emission Data Based on Cmem Modle

    Science.gov (United States)

    Li, Q.; Jia, T.

    2017-09-01

    Carbon dioxide emissions from urban road traffic mainly come from automobile exhaust. However, the carbon dioxide emissions obtained by the instruments are unreliable due to time delay error. In order to improve the reliability of the data, we propose a method, based on the CMEM model, to correct the measured vehicle carbon dioxide emissions. Firstly, a synthetic time series of carbon dioxide emissions is simulated with the CMEM model and GPS velocity data. Then, taking the simulated data as the control group, the time delay error of the measured carbon dioxide emissions can be estimated by asynchronous correlation analysis, and the outliers can be automatically identified and corrected using the principle of the DTW algorithm. Taking the taxi trajectory data of Wuhan as an example, the results show that (1) the correlation coefficient between the measured data and the control group data can be improved from 0.52 to 0.59 by mitigating the systematic time delay error. Furthermore, by adjusting the outliers, which account for 4.73% of the total data, the correlation coefficient can rise to 0.63, which suggests strong correlation. The construction of low-carbon traffic has become a focus of the local government, and in response to the call for energy saving and emission reduction, the distribution of carbon emissions from motor vehicle exhaust was studied. The corrected data can thus be used for further air quality analysis.
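
    The time-delay estimation idea can be sketched by finding the lag that maximises the correlation between the measured series and the simulated control series; the synthetic signals below stand in for the instrument record and the CMEM output, and this is not the exact correction algorithm of the paper.

```python
import numpy as np

# Estimate the instrument time delay by scanning lags and picking the one
# that maximises the correlation with the simulated (control) series, then
# shift the measurement. Both series here are synthetic.

rng = np.random.default_rng(7)
t = np.arange(600)                                  # 1 Hz samples
simulated = 2.0 + np.sin(t / 30.0) + rng.normal(0, 0.05, t.size)
true_lag = 12
measured = np.roll(simulated, true_lag) + rng.normal(0, 0.05, t.size)

def best_lag(x, y, max_lag=60):
    """Lag of y relative to x (in samples) maximising Pearson correlation."""
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(x[max_lag:-max_lag],
                         np.roll(y, -k)[max_lag:-max_lag])[0, 1] for k in lags]
    return list(lags)[int(np.argmax(corrs))]

lag = best_lag(simulated, measured)
corrected = np.roll(measured, -lag)                 # undo the estimated delay
print(f"estimated delay: {lag} samples (true value was {true_lag})")
```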

  11. SVR-based prediction of carbon emissions from energy consumption in Henan Province

    Science.gov (United States)

    Gou, Guohua

    2018-02-01

    This paper analyzes the advantage of support vector regression (SVR) in the prediction of carbon emissions and establishes an SVR-based carbon emission prediction model. The model is established using data on Henan’s carbon emissions and influencing factors from 1991 to 2016 for training and testing, and then predicts the carbon emissions from 2017 to 2021. The results show that: from the perspective of carbon emissions from energy consumption, emissions rose by 224.876 million tons of carbon dioxide from 1991 to 2016, and the predicted increment from 2017 to 2021 is 30.5563 million tons, with an average annual growth rate of 3%. Among the six factors related to carbon emissions, population, urbanization rate, per capita GDP and energy consumption per unit of GDP influence the growth rate of carbon emissions less than the proportion of secondary industry and the coal consumption ratio. Finally, some suggestions are proposed for carbon emission reduction in Henan Province.
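
    A minimal SVR workflow of the kind described above might look as follows, with synthetic driver variables and emissions in place of the Henan data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch of SVR-based emission prediction: train on historical driver
# variables (e.g. population, GDP per capita, energy intensity) and
# emissions, then predict future years. All data below are synthetic.

rng = np.random.default_rng(2)
years = np.arange(1991, 2017)
X = np.column_stack([
    np.linspace(86, 95, years.size),        # population (millions, synthetic)
    np.linspace(2, 45, years.size),         # per-capita GDP (synthetic index)
    np.linspace(3.0, 1.1, years.size),      # energy intensity (synthetic)
])
y = 80 + 4.5 * X[:, 1] - 6.0 * X[:, 2] + rng.normal(0, 3, years.size)  # Mt CO2

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, y)

X_future = np.array([[96.0, 48.0, 1.05],    # assumed drivers for future years
                     [96.5, 51.0, 1.00]])
print("predicted emissions (Mt CO2, illustrative):",
      np.round(model.predict(X_future), 1))
```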

  12. Evaluation of a rapid LMP-based approach for calculating marginal unit emissions

    International Nuclear Information System (INIS)

    Rogers, Michelle M.; Wang, Yang; Wang, Caisheng; McElmurry, Shawn P.; Miller, Carol J.

    2013-01-01

    Highlights: • Pollutant emissions estimated based on locational marginal price and eGRID data. • Stochastic model using IEEE RTS-96 system used to evaluate LMP approach. • Incorporating membership function enhanced reliability of pollutant estimate. • Errors in pollutant estimates quantified for CO 2 , NO X and SO 2 . - Abstract: To evaluate the sustainability of systems that draw power from electrical grids there is a need to rapidly and accurately quantify pollutant emissions associated with power generation. Air emissions resulting from electricity generation vary widely among power plants based on the types of fuel consumed, the efficiency of the plant, and the type of pollution control systems in service. To address this need, methods for estimating real-time air emissions from power generation based on locational marginal prices (LMPs) have been developed. Based on LMPs the type of the marginal generating unit can be identified and pollutant emissions are estimated. While conceptually demonstrated, this LMP approach has not been rigorously tested. The purpose of this paper is to (1) improve the LMP method for predicting pollutant emissions and (2) evaluate the reliability of this technique through power system simulations. Previous LMP methods were expanded to include marginal emissions estimates using an LMP Emissions Estimation Method (LEEM). The accuracy of emission estimates was further improved by incorporating a probability distribution function that characterizes generator fuel costs and a membership function (MF) capable of accounting for multiple marginal generation units. Emission estimates were compared to those predicted from power flow simulations. The improved LEEM was found to predict the marginal generation type approximately 70% of the time based on typical system conditions (e.g. loads and fuel costs) without the use of a MF. With the addition of a MF, the LEEM was found to provide emission estimates with
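
    To make the LEEM idea concrete, the sketch below maps an observed LMP to a membership-weighted marginal emission factor by comparing the price against assumed marginal-cost ranges of generator types. All numbers (cost ranges, CO2 factors) and the triangular membership function are hypothetical placeholders chosen for illustration; they are not values from the paper, from eGRID, or from the IEEE RTS-96 system.

```python
# Hypothetical generator types: (low $/MWh, high $/MWh, kg CO2 per MWh).
# These figures are illustrative placeholders, not data from the paper or eGRID.
GEN_TYPES = {
    "coal":       (20.0, 35.0, 950.0),
    "gas_cc":     (30.0, 55.0, 400.0),
    "gas_peaker": (50.0, 120.0, 550.0),
}

def membership(lmp, lo, hi):
    """Triangular membership of an LMP in a generator's assumed marginal-cost range."""
    if lmp <= lo or lmp >= hi:
        return 0.0
    mid = 0.5 * (lo + hi)
    return (lmp - lo) / (mid - lo) if lmp <= mid else (hi - lmp) / (hi - mid)

def marginal_emission_factor(lmp):
    """Membership-weighted CO2 emission factor of the likely marginal unit(s)."""
    weights = {g: membership(lmp, lo, hi) for g, (lo, hi, _) in GEN_TYPES.items()}
    total = sum(weights.values())
    if total == 0.0:
        return None  # LMP falls outside all assumed cost ranges
    return sum(w * GEN_TYPES[g][2] for g, w in weights.items()) / total

print(marginal_emission_factor(45.0))  # kg CO2/MWh attributed to the marginal unit
```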

  13. Historic global biomass burning emissions for CMIP6 (BB4CMIP) based on merging satellite observations with proxies and fire models (1750–2015)

    Directory of Open Access Journals (Sweden)

    M. J. E. van Marle

    2017-09-01

    Full Text Available Fires have influenced atmospheric composition and climate since the rise of vascular plants, and satellite data have shown the overall global extent of fires. Our knowledge of historic fire emissions has progressively improved over the past decades due mostly to the development of new proxies and the improvement of fire models. Currently, there is a suite of proxies including sedimentary charcoal records, measurements of fire-emitted trace gases and black carbon stored in ice and firn, and visibility observations. These proxies provide opportunities to extrapolate emission estimates back in time based on satellite data starting in 1997, but each proxy has strengths and weaknesses regarding, for example, the spatial and temporal extents over which they are representative. We developed a new historic biomass burning emissions dataset starting in 1750 that merges the satellite record with several existing proxies and uses the average of six models from the Fire Model Intercomparison Project (FireMIP protocol to estimate emissions when the available proxies had limited coverage. According to our approach, global biomass burning emissions were relatively constant, with 10-year averages varying between 1.8 and 2.3 Pg C yr−1. Carbon emissions increased only slightly over the full time period and peaked during the 1990s after which they decreased gradually. There is substantial uncertainty in these estimates, and patterns varied depending on choices regarding data representation, especially on regional scales. The observed pattern in fire carbon emissions is for a large part driven by African fires, which accounted for 58 % of global fire carbon emissions. African fire emissions declined since about 1950 due to conversion of savanna to cropland, and this decrease is partially compensated for by increasing emissions in deforestation zones of South America and Asia. These global fire emission estimates are mostly suited for global analyses and

  14. CO{sub 2} emission from coal-based electricity generation in Germany; CO{sub 2}-Emissionen aus der Kohleverstromung in Deutschland

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, Hauke; Harthan, Ralph O.

    2014-03-10

    In 2013 coal-based electricity generation increased, mainly because emission trading currently cannot produce an adequate steering effect. Of 10 coal-fired power plants in Germany, nine use brown coal and only one uses hard coal. Productivity analyses show that brown coal-fired plants achieve higher productivity than gas- or hard coal-fired power plants, but CO{sub 2} emissions are significantly higher for brown coal. The oldest (older than 40 years) and least efficient brown coal-fired power plants are operated in Nordrhein-Westfalen. Germany has committed itself to reduce CO{sub 2} emissions by 40% by 2020 compared to 1990. If this is to be achieved through emission trading, prices would have to rise to more than 40 Euro/ton CO{sub 2} well before 2020. Otherwise, administrative regulations would be necessary to reach the environmental goal.

  15. Benchmark-based emission allocation in a cap-and-trade system

    International Nuclear Information System (INIS)

    Groenenberg, H.; Blok, K.

    2002-01-01

    One of the important bottlenecks for the introduction of emission trading is how allowances should be distributed among the participants in a trading scheme. Both grandfathering on the basis of historic emissions and auctioning have important drawbacks. In this paper, we propose an allowance distribution rule based on benchmarking of production processes: each company's share in the total allowance is determined by its production level and a reference emission level per product. The scheme shows some important advantages compared to other schemes.
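
    The proposed rule can be written compactly as A_i = q_i * b_p, where q_i is company i's production level and b_p the reference emission level (benchmark) for its product p, with an optional proportional rescaling so that the allocations sum to the overall cap. The snippet below is a minimal sketch of that rule with hypothetical firms, benchmarks and cap; it illustrates the stated principle, not the authors' exact allocation formula.

```python
def benchmark_allocation(production, benchmark, cap=None):
    """Allocate allowances as production * product benchmark; optionally
    rescale proportionally so the total equals an emissions cap."""
    raw = {firm: q * benchmark[prod] for firm, (prod, q) in production.items()}
    if cap is not None:
        scale = cap / sum(raw.values())
        raw = {firm: a * scale for firm, a in raw.items()}
    return raw

# hypothetical example: two steel makers and one cement maker
benchmark = {"steel": 1.8, "cement": 0.8}          # t CO2 per t product (made up)
production = {"A": ("steel", 1.0e6), "B": ("steel", 0.5e6), "C": ("cement", 2.0e6)}
print(benchmark_allocation(production, benchmark, cap=4.0e6))
```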

  16. Anticorrelation between exciplex emission and photovoltaic efficiency in PPV polymer based solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Chunhong, Yin; Neher, Dieter [University of Potsdam, Institute of Physics, Am Neuen Palais 10, 14469 Potsdam (Germany); Kietzke, Thomas [University of Potsdam, Institute of Physics, Am Neuen Palais 10, 14469 Potsdam (Germany); Institute of Materials Research and Engineering (IMRE), Research Link 3, 117602 Singapore (Singapore); Hoerhold, Hans-Heinrich [University of Jena, Institute of Organic Chemistry and Macromolecular Chemistry, Humboldtstr. 10, 07743 Jena (Germany)

    2007-07-01

    By studying the photoluminescence emission and photovoltaic properties of blends of PPV-based electron-donating and electron-accepting polymers, we observed a strict anticorrelation between the relative exciplex emission in the solid state and the photovoltaic efficiency of the corresponding blend devices. Thermal annealing led to a decrease in exciplex emission accompanied by an increase in photovoltaic efficiency. Comparative studies on devices with defined bilayer geometries did not show any influence of the annealing step. Consequently, we conclude that the photocurrent is mainly determined by the efficiency of free carrier formation rather than by transport and free carrier recombination.

  17. Low Emissions and Delay Optimization for an Isolated Signalized Intersection Based on Vehicular Trajectories.

    Directory of Open Access Journals (Sweden)

    Ciyun Lin

    Full Text Available A traditional traffic signal control system is established based on vehicular delay, queue length, saturation and other indicators. However, due to the increasing severity of urban environmental pollution issues and the development of a resource-saving and environmentally friendly social philosophy, the development of low-carbon and energy-efficient urban transport is required. This paper first defines vehicular trajectories and the calculation of vehicular emissions based on VSP. Next, a regression analysis method is used to quantify the relationship between vehicular emissions and delay, and a traffic signal control model is established to reduce emissions and delay using the enumeration method combined with saturation constraints. Finally, one typical intersection of Changchun is selected to verify the model proposed in this paper; its performance efficiency is also compared using simulations in VISSIM. The results of this study show that the proposed model can significantly reduce vehicle delay and traffic emissions simultaneously.

  18. Low Emissions and Delay Optimization for an Isolated Signalized Intersection Based on Vehicular Trajectories.

    Science.gov (United States)

    Lin, Ciyun; Gong, Bowen; Qu, Xin

    2015-01-01

    A traditional traffic signal control system is established based on vehicular delay, queue length, saturation and other indicators. However, due to the increasing severity of urban environmental pollution issues and the development of a resource-saving and environmentally friendly social philosophy, the development of low-carbon and energy-efficient urban transport is required. This paper first defines vehicular trajectories and the calculation of vehicular emissions based on VSP. Next, a regression analysis method is used to quantify the relationship between vehicular emissions and delay, and a traffic signal control model is established to reduce emissions and delay using the enumeration method combined with saturation constraints. Finally, one typical intersection of Changchun is selected to verify the model proposed in this paper; its performance efficiency is also compared using simulations in VISSIM. The results of this study show that the proposed model can significantly reduce vehicle delay and traffic emissions simultaneously.
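
    Both records above rely on VSP (vehicle specific power) to map a vehicle trajectory to emissions. A commonly quoted light-duty formulation is VSP = v(1.1a + g*sin(theta) + 0.132) + 0.000302*v^3 in kW/tonne, with v in m/s and a in m/s^2; an emission rate is then looked up per VSP bin. The sketch below uses those commonly quoted coefficients and entirely hypothetical bin rates as an illustration; the papers' actual coefficients, bins, and the regression between emissions and delay are not reproduced here.

```python
import math

def vsp_kw_per_tonne(speed_mps, accel_mps2, grade=0.0):
    """Vehicle specific power (kW/tonne) using commonly quoted light-duty
    coefficients; these defaults are illustrative, not the papers' values."""
    return speed_mps * (1.1 * accel_mps2 + 9.81 * math.sin(math.atan(grade)) + 0.132) \
        + 0.000302 * speed_mps ** 3

def emission_rate(vsp, bin_rates):
    """Look up an emission rate (g/s) from a VSP-binned table (hypothetical rates)."""
    for (lo, hi), rate in bin_rates.items():
        if lo <= vsp < hi:
            return rate
    return 0.0

# made-up CO2 rates per VSP bin, g/s
bins = {(-float("inf"), 0): 0.8, (0, 10): 1.5, (10, 20): 2.6, (20, float("inf")): 4.0}
print(emission_rate(vsp_kw_per_tonne(12.0, 0.5), bins))
```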

  19. Research on forecast technology of mine gas emission based on fuzzy data mining (FDM)

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chang-kai; Wang Yao-cai; Wang Jun-wei [CUMT, Xuzhou (China). School of Information and Electrical Engineering

    2004-07-01

    Coal mine production safety can be further improved by forecasting the quantity of gas emission from the real-time and historical data saved by the gas monitoring system. Making use of the advantages of data warehouse and data mining technology for processing large quantities of redundant data, the method of forecasting mine gas emission quantity based on FDM and its application were studied. The construction of a fuzzy resemblance relation and a clustering analysis were proposed, through which potential relationships within the gas emission data may be found. A pattern-discovery model and a forecast model were presented, together with the detailed approach to realizing the forecast, and these have been applied to forecast gas emission quantity efficiently.

  20. Ecology of Subglacial Lake Vostok (Antarctica), Based on Metagenomic/Metatranscriptomic Analyses of Accretion Ice

    Directory of Open Access Journals (Sweden)

    Tom D'Elia

    2013-03-01

    Full Text Available Lake Vostok is the largest of the nearly 400 subglacial Antarctic lakes and has been continuously buried by glacial ice for 15 million years. Extreme cold, heat (from possible hydrothermal activity), pressure (from the overriding glacier) and dissolved oxygen (delivered by melting meteoric ice), in addition to limited nutrients and complete darkness, combine to produce one of the most extreme environments on Earth. Metagenomic/metatranscriptomic analyses of ice that accreted over a shallow embayment and over the southern main lake basin indicate the presence of thousands of species of organisms (94% Bacteria, 6% Eukarya, and two Archaea). The predominant bacterial sequences were closest to those from species of Firmicutes, Proteobacteria and Actinobacteria, while the predominant eukaryotic sequences were most similar to those from species of ascomycetous and basidiomycetous Fungi. Based on the sequence data, the lake appears to contain a mixture of autotrophs and heterotrophs capable of performing nitrogen fixation, nitrogen cycling, carbon fixation and nutrient recycling. Sequences closest to those of psychrophiles and thermophiles indicate a cold lake with possible hydrothermal activity. Sequences most similar to those from marine and aquatic species suggest the presence of marine and freshwater regions.

  1. Loss of Flow Accident (LOFA) analyses using LabView-based NRR simulator

    Energy Technology Data Exchange (ETDEWEB)

    Arafa, Amany Abdel Aziz; Saleh, Hassan Ibrahim [Atomic Energy Authority, Cairo (Egypt). Radiation Engineering Dept.; Ashoub, Nagieb [Atomic Energy Authority, Cairo (Egypt). Reactor Physics Dept.

    2016-12-15

    This paper presents a generic Loss of Flow Accident (LOFA) scenario module which is integrated into the LabView-based simulator to imitate Nuclear Research Reactor (NRR) behavior for different user-defined LOFA scenarios. It also provides analyses of a LOFA in a single fuel channel and its impact on operational transients and on the behavior of the reactor. The generic LOFA scenario module includes the graphs needed to clarify the effects of the LOFA under study. Furthermore, the percentage of loss of mass flow rate, the mode of flow reduction, and the start time and transient time of the LOFA are user defined to add flexibility to the LOFA scenarios. The objective of integrating such a generic LOFA module is to be able to deal with such incidents and avoid their significant effects. It is also useful for developing expertise in this area and for reducing operator training and simulation costs. The results of the implemented generic LOFA module agree well with those of the COBRA-IIIC code and the earlier guidebook for this series of transients.

  2. TAXONOMY AND GENETIC RELATIONSHIPS OF PANGASIIDAE, ASIAN CATFISHES, BASED ON MORPHOLOGICAL AND MOLECULAR ANALYSES

    Directory of Open Access Journals (Sweden)

    Rudhy Gustiano

    2007-12-01

    Full Text Available Pangasiids are economically important riverine catfishes generally residing in freshwater from the Indian subcontinent to the Indonesian Archipelago. The systematics of this family are still poorly known. Consequently, lack of such basic information impedes the understanding of the biology of the Pangasiids and the study of their aquaculture potential as well as improvement of seed production and growth performance. The objectives of the present study are to clarify phylogeny of this family based on a biometric analysis and molecular evidence using 12S ribosomal mtDNA on the total of 1070 specimens. The study revealed that 28 species are recognised as valid in Pangasiidae. Four genera are also recognized as Helicophagus Bleeker 1858, Pangasianodon Chevey 1930, Pteropangasius Fowler 1937, and Pangasius Valenciennes 1840 instead of two as reported by previous workers. The phylogenetic analysis demonstrated the recognised genera, and genetic relationships among taxa. Overall, trees from the different analyses show similar topologies and confirm the hypothesis derived from geological history, palaeontology, and similar models in other taxa of fishes from the same area. The oldest genus may already have existed when the Asian mainland was still connected to the islands in the southern part about 20 million years ago.

  3. Historical Weathering Based on Chemical Analyses of Two Spodosols in Southern Sweden

    International Nuclear Information System (INIS)

    Melkerud, Per-Arne; Bain, Derek C.; Olsson, Mats T.

    2003-01-01

    Chemical weathering losses were calculated for two conifer stands in relation to ongoing studies on liming effects and ash amendments on chemical status, soil solution chemistry and soil genesis. Weathering losses were based on elemental depletion trends in soil profiles since deglaciation and exposure to the weathering environment. Gradients in total geochemical composition were assumed to reflect alteration over time. Study sites were Horroed and Hassloev in southern Sweden. Both the Horroed and Hassloev sites are located on sandy loamy Weichselian till at altitudes of 85 and 190 m a.s.l., respectively. Aliquots from volume-determined samples from a number of soil levels were fused with lithium metaborate, dissolved in HNO3, and analysed by ICP-AES. Results indicated the highest cumulative weathering losses at Hassloev. The weathering losses for the elements are in the following order: Si > Al > K > Na > Ca > Mg. Total annual losses for Ca+Mg+K+Na, expressed in mmol_c m^-2 yr^-1, amounted to c. 28 and 58 at Horroed and Hassloev, respectively. Variations between study sites could not be explained by differences in bulk density, geochemistry or mineralogy. The accumulated weathering losses since deglaciation were larger in the uppermost 15 cm than in deeper B horizons for most elements studied.

  4. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a high reduction of computing time by factors of the magnitude of 100. (authors)
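
    One statistical reading of the two-series idea is that, when both series of runs reuse the same epistemic (e.g. nuclear data) samples but have independent Monte Carlo noise, the covariance between paired results estimates the epistemic variance, because the aleatoric noise cancels in expectation. The toy example below illustrates that separation with synthetic numbers; it is a sketch of the statistical principle under those assumptions, not the XSUSA/KENO-Va implementation described in the record.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 200

# epistemic spread: the "true" response varies with the sampled input data
true_response = rng.normal(1.000, 0.005, n_samples)   # e.g. k-eff per data sample
sigma_mc = 0.004                                       # aleatoric (Monte Carlo) noise per run

# two series of short Monte Carlo runs that reuse the SAME epistemic samples
series_a = true_response + rng.normal(0.0, sigma_mc, n_samples)
series_b = true_response + rng.normal(0.0, sigma_mc, n_samples)

total_var = np.var(series_a, ddof=1)
epistemic_var = np.cov(series_a, series_b, ddof=1)[0, 1]  # independent MC noise cancels
print(f"total std: {np.sqrt(total_var):.4f}")
print(f"estimated epistemic std: {np.sqrt(max(epistemic_var, 0)):.4f} (true value: 0.0050)")
```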

  5. 40 CFR 600.208-12 - Calculation of FTP-based and HFET-based fuel economy and carbon-related exhaust emission values...

    Science.gov (United States)

    2010-07-01

    ...-based fuel economy and carbon-related exhaust emission values for a model type. 600.208-12 Section 600... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later...-based and HFET-based fuel economy and carbon-related exhaust emission values for a model type. (a) Fuel...

  6. Life cycle energy use and GHG emission assessment of coal-based SNG and power cogeneration technology in China

    International Nuclear Information System (INIS)

    Li, Sheng; Gao, Lin; Jin, Hongguang

    2016-01-01

    Highlights: • Life cycle energy use and GHG emissions are assessed for SNG and power cogeneration. • A model based on a Chinese domestic database is developed for evaluation. • Cogeneration shows lower GHG emissions than coal-power pathway. • Cogeneration has lower life cycle energy use than supercritical coal-power pathway. • Cogeneration is a good option to implement China’s clean coal technologies. - Abstract: Life cycle energy use and GHG emissions are assessed for coal-based synthetic natural gas (SNG) and power cogeneration/polygenereation (PG) technology and its competitive alternatives. Four main SNG applications are considered, including electricity generation, steam production, SNG vehicle and battery electric vehicle (BEV). Analyses show that if SNG is produced from a single product plant, the lower limits of its life cycle energy use and GHG emissions can be comparable to the average levels of coal-power and coal-BEV pathways, but are still higher than supercritical and ultra supercritical (USC) coal-power and coal-BEV pathways. If SNG is coproduced from a PG plant, when it is used for power generation, steam production, and driving BEV car, the life cycle energy uses for PG based pathways are typically lower than supercritical coal-power pathways, but are still 1.6–2.4% higher than USC coal-power pathways, and the average life cycle GHG emissions are lower than those of all coal-power pathways including USC units. If SNG is used to drive vehicle car, the life cycle energy use and GHG emissions of PG-SNGV-power pathway are both much higher than all combined coal-BEV and coal-power pathways, due to much higher energy consumption in a SNG driven car than in a BEV car. The coal-based SNG and power cogeneration technology shows comparable or better energy and environmental performances when compared to other coal-based alternatives, and is a good option to implement China’s clean coal technologies.

  7. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  8. Analysing the operative experience of basic surgical trainees in Ireland using a web-based logbook

    LENUS (Irish Health Repository)

    Lonergan, Peter E

    2011-09-25

    Abstract Background There is concern about the adequacy of operative exposure in surgical training programmes, in the context of changing work practices. We aimed to quantify the operative exposure of all trainees on the National Basic Surgical Training (BST) programme in Ireland and compare the results with arbitrary training targets. Methods Retrospective analysis of data obtained from a web-based logbook (http://www.elogbook.org) for all general surgery and orthopaedic training posts between July 2007 and June 2009. Results 104 trainees recorded 23,918 operations between two 6-month general surgery posts. The most common general surgery operation performed was simple skin excision with trainees performing an average of 19.7 (± 9.9) over the 2-year training programme. Trainees most frequently assisted with cholecystectomy with an average of 16.0 (± 11.0) per trainee. Comparison of trainee operative experience to arbitrary training targets found that 2-38% of trainees achieved the targets for 9 emergency index operations and 24-90% of trainees achieved the targets for 8 index elective operations. 72 trainees also completed a 6-month post in orthopaedics and recorded 7,551 operations. The most common orthopaedic operation that trainees performed was removal of metal, with an average of 2.90 (± 3.27) per trainee. The most common orthopaedic operation that trainees assisted with was total hip replacement, with an average of 10.46 (± 6.21) per trainee. Conclusions A centralised web-based logbook provides valuable data to analyse training programme performance. Analysis of logbooks raises concerns about operative experience at junior trainee level. The provision of adequate operative exposure for trainees should be a key performance indicator for training programmes.

  9. [Research on fast classification based on LIBS technology and principal component analyses].

    Science.gov (United States)

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were done on thirteen different kinds of standard aluminum alloy samples belonging to 4 different types, and the results suggested that the LIBS-PCA method can be used for fast aluminum alloy classification. PCA was used to analyze the spectral data from the LIBS experiments: the three principal components that contribute the most were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. It was found that the spectral sample points clearly converge according to the type of aluminum alloy they belong to. This result confirmed the three principal components and the preliminary aluminum alloy type zoning. To verify its accuracy, 20 different aluminum alloy samples were used in the same experiments to check the type zoning. The experimental results showed that the spectral sample points were all located in the area corresponding to their aluminum alloy type, which proved the correctness of the earlier zoning based on the standard samples. On this basis, identification of aluminum alloys of unknown type can be carried out. All experimental results showed that the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy is more than 97.14%, and that it can classify the different types effectively. Compared to commonly used chemical methods, laser-induced breakdown spectroscopy can detect samples in situ and quickly, with little sample preparation; therefore, combining LIBS and PCA in areas such as quality testing and on-line industrial control can save considerable time and cost and greatly improve detection efficiency.
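
    The projection step described above can be sketched with scikit-learn: compute the leading principal components of a set of spectra and inspect how the scores cluster by alloy type. The example below uses synthetic spectra with three made-up "alloy" classes purely to illustrate the LIBS-PCA workflow; it is not the paper's data or its zoning procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# hypothetical LIBS spectra: 60 spectra x 500 channels, three alloy "types"
# that differ in the relative intensity of a few emission lines
base = rng.random(500)
spectra, labels = [], []
for alloy, lines in enumerate([(50, 120), (200, 310), (400, 450)]):
    for _ in range(20):
        s = base + 0.02 * rng.standard_normal(500)
        for ch in lines:
            s[ch] += 1.0 + 0.1 * rng.standard_normal()
        spectra.append(s)
        labels.append(alloy)
spectra = np.asarray(spectra)

# project onto the three leading principal components and check the grouping
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
for alloy in range(3):
    centroid = scores[np.array(labels) == alloy].mean(axis=0)
    print(f"alloy {alloy} centroid in PC space: {np.round(centroid, 2)}")
```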

  10. Emissions from Road Vehicles Fuelled by Fischer Tropsch Based Diesel and Gasoline

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, U; Lundorf, P; Ivarsson, A; Schramm, J [Technical University of Denmark (Denmark); Rehnlund, B [Atrax Energi AB (Sweden); Blinge, M [The Swedish Transport Institute (Sweden)

    2006-11-15

    The described work was carried out under the umbrella of the IEA Advanced Motor Fuels Agreement. The purpose was to evaluate the emissions of carbon monoxide (CO), unburned hydrocarbons (HC), nitrogen oxides (NOx), particulate matter (PM) and polycyclic aromatic hydrocarbons (PAH) from vehicles fuelled by Fischer-Tropsch (FT) based diesel and gasoline fuel, compared to the emissions from ordinary diesel and gasoline. The comparison for diesel fuels was based on a literature review, whereas the gasoline comparison had to be based on our own experiments, since almost no references were found in this field. In this context, measurements according to the Federal Test Procedure (FTP) and the New European Driving Cycle (NEDC) were carried out on a chassis dynamometer with a directly injected gasoline vehicle. Experiments were carried out with a reference fuel, a fuel based 70% on FT, and an alkylate fuel (Aspen), which was expected to be very similar, in many ways, to FT fuel. FT-based diesel generally showed good emission performance, whereas the FT-based gasoline did not necessarily lead to lower emissions. On the other hand, the Aspen fuel did show many advantages for the emissions from the gasoline vehicle.

  11. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plant, transportation, residential and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential to PM2.5. The first method is the source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx) driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is the source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  12. [Multispectral Radiation Algorithm Based on Emissivity Model Constraints for True Temperature Measurement].

    Science.gov (United States)

    Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng

    2015-10-01

    Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry; however, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true temperature measurement is proposed, and constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it follows from the relationship between brightness temperatures at different wavelengths that emissivity is an increasing function of wavelength over an interval in which the brightness temperature is increasing or constant, and that emissivity satisfies an inequality relating emissivity and wavelength over an interval in which the brightness temperature is decreasing. With these emissivity model constraints, the construction of assumed emissivity values is reduced from multiple classes to a single class, and unnecessary emissivity constructions are avoided on the basis of the brightness temperature information. Simulation experiments and comparisons for two different temperature points are carried out for five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing and then increasing with wavelength, first increasing and then decreasing, and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.
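
    In standard multispectral thermometry under the Wien approximation, the brightness temperature model referred to above takes the form 1/T_b(lambda) = 1/T - (lambda/c2)*ln(epsilon(lambda)), where c2 is the second radiation constant; the monotonicity constraints on emissivity follow from comparing T_b at neighbouring wavelengths. The snippet below evaluates this standard relation for an assumed true temperature and emissivity spectrum; it is a generic illustration, not the authors' exact model or constraint set.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def brightness_temperature(true_T, emissivity, wavelength_m):
    """Brightness temperature under the Wien approximation:
    1/Tb = 1/T - (lambda / c2) * ln(emissivity)."""
    return 1.0 / (1.0 / true_T - (wavelength_m / C2) * np.log(emissivity))

# assumed example: emissivity increasing with wavelength, true temperature 1800 K
wl = np.linspace(0.8e-6, 1.6e-6, 5)    # wavelengths, m
eps = np.linspace(0.6, 0.9, 5)          # monotonically increasing emissivity (hypothetical)
print(np.round(brightness_temperature(1800.0, eps, wl), 1))
```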

  13. [Dynamic road vehicle emission inventory simulation study based on real time traffic information].

    Science.gov (United States)

    Huang, Cheng; Liu, Juan; Chen, Chang-Hong; Zhang, Jian; Liu, Deng-Guo; Zhu, Jing-Yu; Huang, Wei-Ming; Chao, Yuan

    2012-11-01

    A vehicle activity survey covering traffic flow distribution, driving conditions, and vehicle technologies was conducted in Shanghai. Databases of vehicle flow, VSP distribution and vehicle categories were established from the surveyed data. Based on this, a dynamic vehicle emission inventory simulation method was designed using real-time traffic information, such as traffic flow and average speed. Some roads in Shanghai city were selected for an hourly vehicle emission simulation as a case study. The survey results show that light-duty passenger cars and taxis are the major vehicles on the roads of Shanghai city, accounting for 48%-72% and 15%-43% of the total flow in each hour, respectively. The VSP distribution has a good relationship with the average speed: its peak tends to move to the high-load section and becomes lower as the average speed increases. Vehicles meeting the Euro 2 and Euro 3 standards make up the majority of the current vehicle population in Shanghai. Based on the calibration of vehicle travel mileage data, the proportions of Euro 2 and Euro 3 standard vehicles are 11%-70% and 17%-51% in the real-world situation, respectively. The emission simulation results indicate that the peak-to-valley ratios for CO, VOC, NO(x) and PM emissions are 3.7, 4.6, 9.6 and 19.8, respectively. CO and VOC emissions mainly come from light-duty passenger cars and taxis and correlate well with the traffic flow. NO(x) and PM emissions mainly come from heavy-duty buses and public buses and are concentrated in the morning and evening peak hours. The established dynamic vehicle emission simulation method can reflect changes in actual road emissions and identify high-emission road sections and hours in real time. The method can provide an important technical means and decision-making basis for transportation environment management.

  14. Deriving fuel-based emission factor thresholds to interpret heavy-duty vehicle roadside plume measurements.

    Science.gov (United States)

    Quiros, David C; Smith, Jeremy D; Ham, Walter A; Robertson, William H; Huai, Tao; Ayala, Alberto; Hu, Shaohua

    2018-04-13

    Remote sensing devices have been used for decades to measure gaseous emissions from individual vehicles at the roadside. Systems have also been developed that entrain diluted exhaust and can also measure particulate matter (PM) emissions. In 2015, the California Air Resources Board (CARB) reported that 8% of in-field diesel particulate filters (DPF) on heavy-duty (HD) vehicles were malfunctioning and emitted about 70% of total diesel PM emissions from the DPF-equipped fleet. A new high-emitter problem in the heavy-duty vehicle fleet had emerged. Roadside exhaust plume measurements reflect a snapshot of real-world operation, typically lasting several seconds. In order to relate roadside plume measurements to laboratory emission tests, we analyzed carbon dioxide (CO 2 ), oxides of nitrogen (NO X ), and PM emissions collected from four HD vehicles during several driving cycles on a chassis dynamometer. We examined the fuel-based emission factors corresponding to possible exceedances of emission standards as a function of vehicle power. Our analysis suggests that a typical HD vehicle will exceed the model year (MY) 2010 emission standards (of 0.2 g NO X /bhp-hr and 0.01 g PM/bhp-hr) by three times when fuel-based emission factors are 9.3 g NO X /kg fuel and 0.11 g PM/kg using the roadside plume measurement approach. Reported limits correspond to 99% confidence levels, which were calculated using the detection uncertainty of emissions analyzers, accuracy of vehicle power calculations, and actual emissions variability of fixed operational parameters. The PM threshold was determined for acceleration events between 0.47 and 1.4 mph/sec only, and the NO X threshold was derived from measurements where aftertreatment temperature was above 200°C. Anticipating a growing interest in real-world driving emissions, widespread implementation of roadside exhaust plume measurements as a complement to in-use vehicle programs may benefit from expanding this analysis to a larger
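
    The link between a brake-specific certification limit (g/bhp-hr) and a fuel-based factor (g/kg fuel) runs through brake-specific fuel consumption (BSFC): EF[g/kg fuel] = EF[g/bhp-hr] / BSFC[kg fuel/bhp-hr]. With an assumed typical heavy-duty diesel BSFC of about 0.065 kg/bhp-hr (roughly 200 g/kWh), three times the MY 2010 NOx standard maps to approximately the 9.3 g NOx/kg fuel threshold quoted above; the PM threshold was derived differently (from acceleration events only), so this simple conversion is not applied to it. The BSFC value is an assumption for illustration, not a figure from the paper.

```python
def fuel_based_ef(ef_g_per_bhph, bsfc_kg_per_bhph=0.065):
    """Convert a brake-specific emission factor (g/bhp-hr) to a fuel-based
    factor (g/kg fuel). The default BSFC of 0.065 kg/bhp-hr (~200 g/kWh) is
    an assumed typical heavy-duty diesel value, not a figure from the paper."""
    return ef_g_per_bhph / bsfc_kg_per_bhph

# three times the MY2010 NOx standard of 0.2 g/bhp-hr
print(round(fuel_based_ef(3 * 0.2), 1), "g NOx per kg fuel")  # ~9.2, close to the 9.3 quoted above
```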

  15. Airway management education: simulation based training versus non-simulation based training-A systematic review and meta-analyses.

    Science.gov (United States)

    Sun, Yanxia; Pan, Chuxiong; Li, Tianzuo; Gan, Tong J

    2017-02-01

    Simulation-based training (SBT) has become a standard for medical education. However, the efficacy of simulation-based training in airway management education remains unclear. The aim of this study was to evaluate all published evidence comparing the effectiveness of SBT for airway management versus non-simulation-based training (NSBT) on learner and patient outcomes. A systematic review with meta-analyses was performed. Data were derived from PubMed, EMBASE, CINAHL, Scopus, the Cochrane Controlled Trials Register and Cochrane Database of Systematic Reviews from inception to May 2016. Published comparative trials that evaluated the effect of SBT on airway management training compared with NSBT were considered. The effect sizes with 95% confidence intervals (CI) were calculated for outcome measures. Seventeen eligible studies were included. SBT was associated with improved behavior performance [standardized mean difference (SMD): 0.30, 95% CI: 0.06 to 0.54] in comparison with NSBT. However, the benefits of SBT were not seen in time-skill (SMD: -0.13, 95% CI: -0.82 to 0.52), written examination score (SMD: 0.39, 95% CI: -0.09 to 0.86) or success rate of procedure completion on patients [relative risk (RR): 1.26, 95% CI: 0.96 to 1.66]. SBT may not be superior to NSBT for airway management training.
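
    The pooled effect sizes quoted above are standardized mean differences: the difference in group means divided by the pooled standard deviation, usually with Hedges' small-sample correction. The helper below computes such an SMD with an approximate 95% confidence interval for a single hypothetical trial; it illustrates the metric itself, not the review's actual meta-analytic pooling.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges' g) with small-sample correction
    and an approximate 95% confidence interval."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    g = d * correction
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# hypothetical behaviour-performance scores: SBT group vs NSBT group
print(hedges_g(78.0, 10.0, 30, 74.0, 11.0, 30))
```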

  16. Comparison based on energy and exergy analyses of the potential cogeneration efficiencies for fuel cells and other electricity generation devices

    Energy Technology Data Exchange (ETDEWEB)

    Rosen, M A [Ryerson Polytechnical Inst., Toronto, (CA). Dept. of Mechanical Engineering

    1990-01-01

    Comparisons of the potential cogeneration efficiencies are made, based on energy and exergy analyses, for several devices for electricity generation. The investigation considers several types of fuel cell system (Phosphoric Acid, Alkaline, Solid Polymer Electrolyte, Molten Carbonate and Solid Oxide), and several fossil-fuel and nuclear cogeneration systems based on steam power plants. In the analysis, each system is modelled as a device for which fuel and air enter, and electrical- and thermal-energy products and material and thermal-energy wastes exit. The results for all systems considered indicate that exergy analyses should be used when analysing the cogeneration potential of systems for electricity generation, because they weigh the usefulnesses of heat and electricity on equivalent bases. Energy analyses tend to present overly optimistic views of performance. These findings are particularly significant when large fractions of the heat output from a system are utilized for cogeneration. (author).
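
    The key difference between the two accounting bases can be made explicit. Energy analysis rates cogeneration as eta = (W + Q) / E_fuel, counting heat and electricity equally, whereas exergy analysis weights the heat product by its Carnot factor, psi = (W + Q*(1 - T0/T)) / Ex_fuel, so low-temperature heat contributes far less. The snippet below contrasts the two measures for an assumed plant; the numbers and the fuel exergy-to-energy ratio (about 1.04, typical of natural gas) are illustrative assumptions, not figures from the paper.

```python
def cogeneration_efficiencies(work, heat, heat_temp_K, fuel_energy,
                              fuel_exergy_factor=1.04, T0=298.15):
    """Energy vs. exergy cogeneration efficiency. The fuel exergy/energy ratio
    and all numbers in the example below are assumed illustrative values."""
    energy_eff = (work + heat) / fuel_energy
    heat_exergy = heat * (1.0 - T0 / heat_temp_K)      # Carnot weighting of the heat product
    exergy_eff = (work + heat_exergy) / (fuel_energy * fuel_exergy_factor)
    return energy_eff, exergy_eff

# 40 MW electricity and 45 MW heat delivered at 400 K from 100 MW of fuel energy
print(cogeneration_efficiencies(40.0, 45.0, 400.0, 100.0))
```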

  17. Teleseism-based Relative Time Corrections for Modern Analyses of Digitized Analog Seismograms

    Science.gov (United States)

    Lee, T. A.; Ishii, M.

    2017-12-01

    With modern-day instruments and seismic networks timed by GPS systems, synchronization of data streams is all but a forgone conclusion. However, during the analog era, when each station had its own clock, comparing data timing from different stations was a far more daunting prospect. Today, with recently developed methods by which analog data can be digitized, having the ability to accurately reconcile the timings of two separate stations would open decades worth of data to modern analyses. For example, one possible and exciting application would be using noise interferometry with digitized analog data in order to investigate changing structural features (on a volcano for example) over a much longer timescale than was previously possible. With this in mind, we introduce a new approach to sync time between stations based on teleseismic arrivals. P-wave arrivals are identified at stations for pairs of earthquakes from the digital and analog eras that have nearly identical distances, locations, and depths. Assuming accurate timing of the modern data, relative time corrections between a pair of stations can then be inferred for the analog data. This method for time correction depends upon the analog stations having modern equivalents, and both having sufficiently long durations of operation to allow for recording of usable teleseismic events. The Hawaii Volcano Observatory (HVO) network is an especially ideal environment for this, as it not only has a large and well-preserved collection of analog seismograms, but also has a long operating history (1912 - present) with many of the older stations having modern equivalents. As such, the scope of this project is to calculate and apply relative time corrections to analog data from two HVO stations, HILB (1919-present) and UWE (1928-present)(HILB now part of Pacific Tsunami network). Further application of this method could be for investigation of the effects of relative clock-drift, that is, the determining factor for how

  18. Molecular Characterization of Five Potyviruses Infecting Korean Sweet Potatoes Based on Analyses of Complete Genome Sequences

    Directory of Open Access Journals (Sweden)

    Hae-Ryun Kwak

    2015-12-01

    Full Text Available Sweet potatoes (Ipomoea batatas L.) are grown extensively in tropical and temperate regions and are important food crops worldwide. In Korea, potyviruses, including Sweet potato feathery mottle virus (SPFMV), Sweet potato virus C (SPVC), Sweet potato virus G (SPVG), Sweet potato virus 2 (SPV2), and Sweet potato latent virus (SPLV), have been detected in sweet potato fields at a high (~95%) incidence. In the present work, complete genome sequences of 18 isolates, representing the five potyviruses mentioned above, were compared with previously reported genome sequences. The complete genomes consisted of 10,081 to 10,830 nucleotides, excluding the poly-A tails. Their genomic organizations were typical of the Potyvirus genus, including one large open reading frame coding for a putative polyprotein. Based on phylogenetic analyses and sequence comparisons, the Korean SPFMV isolates belonged to the strains RC and O with >98% nucleotide sequence identity. Korean SPVC isolates had 99% identity to the Japanese isolate SPVC-Bungo and 70% identity to the SPFMV isolates. The Korean SPVG isolates showed 99% identity to the three previously reported SPVG isolates. Korean SPV2 isolates had 97% identity to the SPV2 GWB-2 isolate from the USA. Korean SPLV isolates had a relatively low (88%) nucleotide sequence identity with the Taiwanese SPLV-TW isolates, and they were phylogenetically distantly related to SPFMV isolates. Recombination analysis revealed that possible recombination events occurred in the P1, HC-Pro and NIa-NIb regions of SPFMV and SPLV isolates, and these regions were identified as hotspots for recombination in the sweet potato potyviruses.

  19. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    Directory of Open Access Journals (Sweden)

    Zhigang Zuo

    2014-04-01

    Full Text Available It is important from the manufacturers' point of view to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were first carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were discussed. A model with a casing diameter of 0.875D40 was selected for further analyses. FEM analyses were then carried out on different combinations of casing sizes, casing wall thicknesses, and materials to evaluate the safety of the casing with respect to pressure integrity, in both static and fatigue strength analyses. Two models, with forged and cast materials, were selected as the final results.

  20. Modelling and optimization of combined cycle power plant based on exergoeconomic and environmental analyses

    International Nuclear Information System (INIS)

    Ganjehkaviri, A.; Mohd Jaafar, M.N.; Ahmadi, P.; Barzegaravval, H.

    2014-01-01

    This research paper presents a comprehensive thermodynamic modelling study of a combined cycle power plant (CCPP). The effects of economic strategies and design parameters on the plant optimization are also studied. Exergoeconomic analysis is conducted in order to determine the cost of electricity and the cost of exergy destruction. In addition, a comprehensive optimization study is performed to determine the optimal design parameters of the power plant. Next, the effects of variations in economic parameters on the sustainability, carbon dioxide emission and fuel consumption of the plant are investigated and presented for a typical combined cycle power plant. Changes in economic parameters shift the balance between cash flows and fixed costs of the plant at the optimum point. Moreover, economic strategies greatly limit the maximum reasonable reduction in carbon emission and fuel consumption. The results showed that by using the optimum values, the exergy efficiency increases by about 6%, while CO 2 emission decreases by 5.63%. However, the variation in the cost was less than 1% due to the fact that a cost constraint was implemented. In addition, the sensitivity analysis for the optimization study was limited to two important parameters; the optimization process and the corresponding results are presented and discussed.

  1. Signal frequency distribution and natural-time analyses from acoustic emission monitoring of an arched structure in the Castle of Racconigi

    Directory of Open Access Journals (Sweden)

    G. Niccolini

    2017-07-01

    Full Text Available The stability of an arch as a structural element in the thermal bath of King Charles Albert (Carlo Alberto in the Royal Castle of Racconigi (on the UNESCO World Heritage List since 1997 was assessed by the acoustic emission (AE monitoring technique with application of classical inversion methods to recorded AE data. First, damage source location by means of triangulation techniques and signal frequency analysis were carried out. Then, the recently introduced method of natural-time analysis was preliminarily applied to the AE time series in order to reveal a possible entrance point to a critical state of the monitored structural element. Finally, possible influence of the local seismic and microseismic activity on the stability of the monitored structure was investigated. The criterion for selecting relevant earthquakes was based on the estimation of the size of earthquake preparation zones. The presented results suggest the use of the AE technique as a tool for detecting both ongoing structural damage processes and microseismic activity during preparation stages of seismic events.

  2. Signal frequency distribution and natural-time analyses from acoustic emission monitoring of an arched structure in the Castle of Racconigi

    Science.gov (United States)

    Niccolini, Gianni; Manuello, Amedeo; Marchis, Elena; Carpinteri, Alberto

    2017-07-01

    The stability of an arch as a structural element in the thermal bath of King Charles Albert (Carlo Alberto) in the Royal Castle of Racconigi (on the UNESCO World Heritage List since 1997) was assessed by the acoustic emission (AE) monitoring technique with application of classical inversion methods to recorded AE data. First, damage source location by means of triangulation techniques and signal frequency analysis were carried out. Then, the recently introduced method of natural-time analysis was preliminarily applied to the AE time series in order to reveal a possible entrance point to a critical state of the monitored structural element. Finally, possible influence of the local seismic and microseismic activity on the stability of the monitored structure was investigated. The criterion for selecting relevant earthquakes was based on the estimation of the size of earthquake preparation zones. The presented results suggest the use of the AE technique as a tool for detecting both ongoing structural damage processes and microseismic activity during preparation stages of seismic events.
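
    A common way to estimate the size of an earthquake preparation zone, and hence to decide whether a given event is relevant to the monitored structure, is the Dobrovolsky strain radius, R = 10^(0.43 M) km: an event is selected when the site lies within R of the epicentre. The snippet below applies that criterion; the Dobrovolsky formula is a standard estimate and may differ in detail from the exact selection criterion used by the authors.

```python
def dobrovolsky_radius_km(magnitude):
    """Radius of the earthquake preparation zone (Dobrovolsky-type estimate)."""
    return 10.0 ** (0.43 * magnitude)

def is_relevant(event_magnitude, distance_km):
    """An event is 'relevant' to the monitored structure if the site falls
    inside the event's preparation zone."""
    return distance_km <= dobrovolsky_radius_km(event_magnitude)

print(round(dobrovolsky_radius_km(4.0), 1))  # ~52 km preparation-zone radius for M4
print(is_relevant(4.0, 35.0))                # True: an M4 event 35 km away is selected
```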

  3. Management accounting approach to analyse energy related CO2 emission: A variance analysis study of top 10 emitters of the world

    International Nuclear Information System (INIS)

    Pani, Ratnakar; Mukhopadhyay, Ujjaini

    2013-01-01

    The paper undertakes a decomposition study of the carbon dioxide emissions of the top ten emitting countries over the period 1980–2007 using a variance analysis method, with the objectives of examining the relative importance of the major determining factors, the role of energy structure and the impact of liberalisation on emissions, and exploring the possibilities of arresting emissions while population and income rise simultaneously. The major findings indicate that although rising income and population are the main driving forces, they are neither necessary nor sufficient for increasing emissions; rather, energy structure and emission intensities are the crucial determinants, pointing towards the fact that a country with higher income and population but a proper energy policy may be a low emitter, and vice versa. Since modern energy-intensive production limits the scope for reducing total energy use, it is necessary to decouple the quantum of energy use from emissions through technological upgrading. The results indicate that liberalisation resulted in higher emissions. The paper attempts to illustrate the required adjustments in energy structure and suggests necessary policy prescriptions.
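
    The drivers discussed above (population, income, energy structure, emission intensity) are the factors of a Kaya-type identity, CO2 = P x (GDP/P) x (E/GDP) x (CO2/E), and any decomposition method apportions the change in emissions among changes in these factors. The sketch below performs a simple exact logarithmic decomposition with made-up numbers to show how the factor contributions add up to the total change; it is a generic illustration, not the variance analysis method used in the paper.

```python
import math

def kaya_contributions(y0, y1):
    """Exact additive decomposition of the change in ln(CO2) into Kaya factors:
    CO2 = P * (GDP/P) * (E/GDP) * (CO2/E)."""
    def factors(y):
        P, gdp, E, co2 = y["pop"], y["gdp"], y["energy"], y["co2"]
        return {"population": P, "income": gdp / P,
                "energy_intensity": E / gdp, "carbon_intensity": co2 / E}
    f0, f1 = factors(y0), factors(y1)
    return {k: math.log(f1[k] / f0[k]) for k in f0}

# hypothetical country, start vs. end year (arbitrary units)
y1980 = {"pop": 50, "gdp": 200, "energy": 300, "co2": 700}
y2007 = {"pop": 70, "gdp": 900, "energy": 800, "co2": 1500}
contrib = kaya_contributions(y1980, y2007)
print({k: round(v, 3) for k, v in contrib.items()})
print("sum of contributions:", round(sum(contrib.values()), 3),
      "= ln(CO2 ratio):", round(math.log(y2007["co2"] / y1980["co2"]), 3))
```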

  4. [Study on Ammonia Emission Rules in a Dairy Feedlot Based on Laser Spectroscopy Detection Method].

    Science.gov (United States)

    He, Ying; Zhang, Yu-jun; You, Kun; Wang, Li-ming; Gao, Yan-wei; Xu, Jin-feng; Gao, Zhi-ling; Ma, Wen-qi

    2016-03-01

    On-line monitoring of ammonia concentration at dairy feedlots is needed to accurately disclose ammonia emission characteristics, so as to reduce ammonia emissions and improve the ecological environment. An on-line monitoring system for ammonia concentration was designed based on tunable diode laser absorption spectroscopy (TDLAS) combined with long open-path technology, and the study was then carried out with this system and an inverse dispersion technique. Ammonia concentrations were detected in situ and ammonia emission patterns were analyzed at a dairy feedlot in Baoding in the autumn and winter of 2013. The monitoring indicated that the peak ammonia concentration was 6.11 x 10(-6) in autumn and 6.56 x 10(-6) in winter. The concentration results show that the variation of ammonia concentration had an obvious diurnal periodicity, the general characteristic being that the concentration was low in the daytime and high at night. The ammonia emission characteristics obtained with the inverse dispersion model show that the peak of ammonia emission velocity appeared at noon. The emission velocity ranged from 1.48 kg/head/hr to 130.6 kg/head/hr in autumn and from 0.0045 kg/head/hr to 43.32 kg/head/hr in winter, which was lower than in autumn. The results demonstrated that ammonia emissions have certain seasonal differences at the dairy feedlot scale. In conclusion, the ammonia concentration was detected with optical technology, and the ammonia emission results were acquired by inverse dispersion model analysis with large range, high sensitivity and quick response, without gas sampling. Thus, it is an effective method for ammonia emission monitoring at dairy feedlots that provides technical support for scientific breeding.

  5. A Study on Vehicle Emission Factor Correction Based on Fuel Consumption Measurement

    Science.gov (United States)

    Wang, Xiaoning; Li, Meng; Peng, Bo

    2018-01-01

    The objective of this study is to address the problem of obvious differences between the calculated and measured emissions of pollutants from motor vehicles when using the existing "Environmental Impact Assessment Specification of Highway Construction Projects". First, a field study collects the vehicle composition ratio, speed, slope, fuel consumption and other essential data. With practical applications in mind, emission factors corresponding to speeds of 40 km/h, 110 km/h and 120 km/h are introduced by data fitting. Then, the motor vehicle emission factors are revised based on the measured fuel consumption, and the modified pollutant emission formula is calculated and compared with the formula recommended in the specification. The results show that the error between calculated and measured values is within 5%, so the revised factors better reflect the actual emissions of motor vehicles.
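
    One plausible reading of the correction step is that a specification emission factor is rescaled by the ratio of measured to reference fuel consumption, since emissions of fuel-carbon-bound pollutants scale roughly with the fuel burned. The helper below sketches that idea with hypothetical numbers; the study's fitted correction formula is not reproduced here.

```python
def corrected_emission_factor(ef_spec_g_per_km, fc_measured_l_per_100km,
                              fc_reference_l_per_100km):
    """Scale a specification emission factor by measured vs. reference fuel use.
    Illustrative only; the study's fitted correction formula is not reproduced."""
    return ef_spec_g_per_km * (fc_measured_l_per_100km / fc_reference_l_per_100km)

# hypothetical: spec factor 180 g CO2/km at a reference 7.0 L/100 km,
# field-measured consumption 8.1 L/100 km on a graded road section
print(round(corrected_emission_factor(180.0, 8.1, 7.0), 1), "g CO2/km")
```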

  6. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    Full Text Available In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a BP neural network optimized by the cuckoo search algorithm (CSBP). In the CSBP algorithm, cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and to obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm, gas emission is taken as the output vector, and the prediction model of the BP neural network with optimal parameters is then established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.

  7. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    Science.gov (United States)

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available and simple assumptions are often used in atmospheric models. As a contribution to improve knowledge on emission heights this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of the probably most comprehensive database of real-world stack information existing in Europe based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling.
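
    The effective emission height combines the stack height with a plume rise computed from exactly the stack parameters listed above (flue gas temperature, exit velocity, stack diameter) and the ambient conditions. One widely used Gaussian-model formulation is the Briggs approach: buoyancy flux F = g * v_s * d^2 * (T_s - T_a) / (4 * T_s), with final rise dh = 21.425 * F^(3/4) / u for F < 55 m^4/s^3 and dh = 38.71 * F^(3/5) / u otherwise (neutral/unstable conditions). The sketch below uses these textbook formulas with hypothetical stack data; the paper's bottom-up equations may differ in detail.

```python
def briggs_final_rise(stack_h, gas_temp, gas_velocity, stack_diam,
                      ambient_temp=293.0, wind_speed=5.0, g=9.81):
    """Effective emission height = stack height + Briggs final buoyant rise
    (neutral/unstable form, as used in common Gaussian dispersion models)."""
    F = g * gas_velocity * stack_diam**2 * (gas_temp - ambient_temp) / (4.0 * gas_temp)
    if F <= 0:
        return stack_h  # no buoyant rise for cold plumes in this simple sketch
    dh = 21.425 * F**0.75 / wind_speed if F < 55.0 else 38.71 * F**0.6 / wind_speed
    return stack_h + dh

# hypothetical power-plant stack: 150 m high, 4 m diameter, 420 K flue gas at 15 m/s
print(round(briggs_final_rise(150.0, 420.0, 15.0, 4.0), 1), "m effective emission height")
```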

  8. Nitrous Oxide (N2O) Emissions from California based on 2010 CalNex Airborne Measurements

    Science.gov (United States)

    Xiang, B.; Miller, S.; Kort, E. A.; Santoni, G. W.; Daube, B.; Commane, R.; Angevine, W. M.; Ryerson, T. B.; Trainer, M.; Andrews, A. E.; Nehrkorn, T.; Tian, H.; Wofsy, S. C.

    2012-12-01

    Nitrous oxide (N2O) is an important gas for climate and for stratospheric chemistry, with an atmospheric lifetime exceeding 100 years. Global concentrations have increased steadily since the 18th century, apparently due to human-associated emissions, principally from the application of nitrogen fertilizers. However, quantitative studies of agricultural emissions at large spatial scales are lacking, inhibited by the difficulty of measuring small enhancements of atmospheric concentrations. Here we derive regional emission rates for N2O in the Central Valley of California, based on analysis of in-situ airborne atmospheric observations collected using a quantum cascade laser spectrometer. The data were obtained on board the NOAA P-3 research aircraft during the CalNex (California Research at the Nexus of Air Quality and Climate Change) program in May and June 2010. We coupled the WRF (Weather Research and Forecasting) model to STILT (Stochastic Time-Inverted Lagrangian Transport) to link our in-situ observations to surface emissions, and then used a variety of statistical methods to identify source areas and to extract optimized emission rates from the inversion. Our results support the view that fertilizer application is the largest source of N2O in the Central Valley. However, the spatial distribution of derived surface emissions, based on California land use and activity maps, was very different from that indicated in the leading emissions inventory (EDGAR 4.0), and our estimated total emission flux of N2O for California during the study period was 3-4 times larger than EDGAR and other inventories.

  9. Maximum Regional Emission Reduction Potential in Residential Sector Based on Spatial Distribution of Population and Resources

    Science.gov (United States)

    Winijkul, E.; Bond, T. C.

    2011-12-01

    In the residential sector, major activities that generate emissions are cooking and heating, and fuels ranging from traditional (wood) to modern (natural gas or electricity) are used. Direct air pollutant emissions from this sector are low when natural gas or electricity are the dominant energy sources, as is the case in developed countries. However, in developing countries people may rely on solid fuels, and this sector can contribute a large fraction of emissions. The magnitude of the health loss associated with exposure to indoor smoke, as well as its concentration among rural populations in developing countries, has recently put preventive measures high on the agenda of international development and public health organizations. This study focuses on these developing regions: Central America, Africa, and Asia. Current and future emissions from the residential sector depend on both fuel and cooking device (stove) type. Availability of fuels, stoves, and interventions depends strongly on spatial distribution. However, regional emission calculations do not consider this spatial dependence. Fuel consumption data are presented at country level, without information about where different types of fuel are used. Moreover, information about stove types that are currently used and could be used in the future is not available. In this study, we first spatially allocate current emissions within the residential sector. We use Geographic Information System maps of temperature, electricity availability, forest area, and population to determine the distribution of fuel types and availability of stoves. Within each country, consumption of different fuel types, such as fuelwood, coal, and LPG, is distributed among different area types (urban, peri-urban, and rural). Then, the cleanest stove technologies that could be used in each area are selected based on the constraints of that area, i.e. availability of resources. Using this map, the maximum emission reduction compared with

  10. Inventory of atmospheric pollutant and greenhouse gas emissions in France. Sectoral series and extended analyses - SECTEN Format, April 2011; Inventaire des emissions de polluants atmospheriques et de gaz a effet de serre en France. Series sectorielles et analyses etendues - Format SECTEN, avril 2011

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jean-Pierre; Fontelle, Jean-Pierre; Serveau, Laetitia; Allemand, Nadine; Jeannot, Coralie; Andre, Jean-Marc; Joya, Romain; Deflorenne, Emmanuel; Martinet, Yann; Druart, Ariane; Mathias, Etienne; Gavel, Antoine; Nicco, Laetitia; Gueguen, Celine; Prouteau, Emilie; Jabot, Julien; Tuddenham, Mark; Jacquier, Guillaume; Vincent, Julien

    2011-04-15

    rather favorable insofar as the estimated level is below that observed in 2009. Regarding greenhouse gases, the trend points to a slight increase (2.2% between 2009 and 2010 for CO{sub 2} and 1.4% in terms of global warming potential), because 2009 was strongly marked by the economic crisis and a recovery began in 2010. The preliminary estimates for 2010 should therefore be considered with caution because they still need to be consolidated. The results are presented at national level for each of the main sectors defined in the SECTEN format. A more detailed breakdown of each main sector is provided for the period 1990-2009. Results also focus on the different energy products, and several analyses provide additional information on NMVOCs, PAHs, HFCs, PFCs, global warming potential and particular sources, such as transport and off-road mobile sources (generators, machinery and vehicles used in construction, industry, agriculture and forestry, as well as household and gardening machinery). The report contains indications regarding the targets to which France has committed itself under international conventions and EU directives, in particular for climate change and for transboundary air pollution and air quality. These results show that, on the whole, the observed emission trends are encouraging and largely reflect the reduction actions implemented. The table below summarises total emissions over the period 1990-2010 for all the above-mentioned substances, as well as indicators concerning acidification and the greenhouse effect

  11. Temporal variation of VOC emission from solvent and water based wood stains

    Science.gov (United States)

    de Gennaro, Gianluigi; Loiotile, Annamaria Demarinis; Fracchiolla, Roberta; Palmisani, Jolanda; Saracino, Maria Rosaria; Tutino, Maria

    2015-08-01

    Solvent- and water-based wood stains were monitored in a small test emission chamber in order to characterize their emission profiles in terms of total and individual VOCs. The study of the concentration-time profiles of individual VOCs made it possible to identify the compounds emitted at higher concentration for each type of stain, to examine their decay curves and, finally, to estimate the concentration in a reference room. The solvent-based wood stain was characterized by the highest total VOC emission level (5.7 mg/m3), which decreased over time more slowly than those of the water-based products. The same finding was observed for the main detected compounds: Benzene, Toluene, Ethylbenzene, Xylenes, Styrene, alpha-Pinene and Camphene. On the other hand, the highest level of Limonene was emitted by a water-based wood stain. However, the concentration-time profile showed that the water-based product reached its maximum and minimum emissions much sooner: the Limonene concentration fell to its minimum in about half the time needed by the solvent-based product. According to the AgBB evaluation scheme, only one of the investigated water-based wood stains can be classified as a low-emitting product whose use is unlikely to have any adverse effect on human health.

  12. Simultaneous acquisition of dual analyser-based phase contrast X-ray images for small animal imaging

    International Nuclear Information System (INIS)

    Kitchen, Marcus J.; Pavlov, Konstantin M.; Hooper, Stuart B.; Vine, David J.; Siu, Karen K.W.; Wallace, Megan J.; Siew, Melissa L.L.; Yagi, Naoto; Uesugi, Kentaro; Lewis, Rob A.

    2008-01-01

    Analyser-based phase contrast X-ray imaging can provide high-contrast images of biological tissues with exquisite sensitivity to the boundaries between tissues. The phase and absorption information can be extracted by processing multiple images acquired at different analyser orientations. Recording both the transmitted and diffracted beams from a thin Laue analyser crystal can make phase retrieval possible for dynamic systems by allowing full field imaging. This technique was used to image the thorax of a mechanically ventilated newborn rabbit pup using a 25 keV beam from the SPring-8 synchrotron radiation facility. The diffracted image was produced from the (1 1 1) planes of a 50 mm x 40 mm, 100 μm thick Si analyser crystal in the Laue geometry. The beam and analyser were large enough to image the entire chest, making it possible to observe changes in anatomy with high contrast and spatial resolution

  13. Simultaneous acquisition of dual analyser-based phase contrast X-ray images for small animal imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kitchen, Marcus J. [School of Physics, Monash University, Victoria 3800 (Australia)], E-mail: Marcus.Kitchen@sci.monash.edu.au; Pavlov, Konstantin M. [School of Physics, Monash University, Victoria 3800 (Australia); Monash Centre for Synchrotron Science, Monash University, Victoria 3800 (Australia); Physics and Electronics, School of Science and Technology, University of New England, NSW 2351 (Australia)], E-mail: Konstantin.Pavlov@sci.monash.edu.au; Hooper, Stuart B. [Department of Physiology, Monash University, Victoria 3800 (Australia)], E-mail: Stuart.Hooper@med.monash.edu.au; Vine, David J. [School of Physics, Monash University, Victoria 3800 (Australia)], E-mail: David.Vine@sci.monash.edu.au; Siu, Karen K.W. [School of Physics, Monash University, Victoria 3800 (Australia); Monash Centre for Synchrotron Science, Monash University, Victoria 3800 (Australia)], E-mail: Karen.Siu@sci.monash.edu.au; Wallace, Megan J. [Department of Physiology, Monash University, Victoria 3800 (Australia)], E-mail: Megan.Wallace@med.monash.edu.au; Siew, Melissa L.L. [Department of Physiology, Monash University, Victoria 3800 (Australia)], E-mail: Melissa.Siew@med.monash.edu.au; Yagi, Naoto [SPring-8/JASRI, Sayo (Japan)], E-mail: yagi@spring8.or.jp; Uesugi, Kentaro [SPring-8/JASRI, Sayo (Japan)], E-mail: ueken@spring8.or.jp; Lewis, Rob A. [School of Physics, Monash University, Victoria 3800 (Australia); Monash Centre for Synchrotron Science, Monash University, Victoria 3800 (Australia)], E-mail: Rob.Lewis@sync.monash.edu.au

    2008-12-15

    Analyser-based phase contrast X-ray imaging can provide high-contrast images of biological tissues with exquisite sensitivity to the boundaries between tissues. The phase and absorption information can be extracted by processing multiple images acquired at different analyser orientations. Recording both the transmitted and diffracted beams from a thin Laue analyser crystal can make phase retrieval possible for dynamic systems by allowing full field imaging. This technique was used to image the thorax of a mechanically ventilated newborn rabbit pup using a 25 keV beam from the SPring-8 synchrotron radiation facility. The diffracted image was produced from the (1 1 1) planes of a 50 mm x 40 mm, 100 {mu}m thick Si analyser crystal in the Laue geometry. The beam and analyser were large enough to image the entire chest, making it possible to observe changes in anatomy with high contrast and spatial resolution.

  14. Monitoring of CO{sub 2}-emissions in refineries - Analysis of existing systems; Erfassung von CO{sub 2}-Emissionen in Raffinerien - Analyse vorliegender Systeme

    Energy Technology Data Exchange (ETDEWEB)

    Trautwein, W.P.

    2003-09-01

    This report describes six different methods of monitoring and reporting of CO{sub 2}-emissions of refineries and compares these with regard to their suitability for emissions trading. As a result of this study two systems appear to be most suited: 1. ''Study on the monitoring and measurement of greenhouse gas emissions at the plant level in the context of the Kyoto mechanisms'', Center of Clean Air Policy, USA 2. ''The greenhouse gas protocol - a corporate accounting and reporting standard'', World Business Council for Sustainable Development, Switzerland. Essentially the result of this DGMK-Research Report is in agreement with a draft guideline of the EU for emissions trading, which, however, is much more detailed and comprehensive. (orig.)

  15. [Spatial temporal differentiation of product-based and consumption-based CO2 emissions and balance in the Beijing-Tianjin-Hebei region: an economic input- output analysis].

    Science.gov (United States)

    Wang, Hao; Chen, Cao-cao; Pan, Tao; Liu, Chun-lan; Chen, Long; Sun, Li

    2014-09-01

    Distinguishing product-based and consumption-based CO2 emissions in an open economic region is the basis for differentiating emission responsibility, and it is attracting increasing attention from decision-makers. The spatial and temporal characteristics of product-based and consumption-based CO2 emissions, as well as the carbon balance, of the JING-JIN-JI (Beijing-Tianjin-Hebei) region in 1997, 2002 and 2007 were analyzed with the Economic Input-Output Life Cycle Assessment (EIO-LCA) model. The results revealed that both product-based and consumption-based CO2 emissions in the region increased by about 4% annually. The percentage of CO2 emissions embodied in trade was 30%-83%, to which domestic trade contributed the most. The territorial and consumption-based CO2 emissions of Hebei province were the predominant emissions in the JING-JIN-JI region, and their growth rate and emission intensity were higher than those of Beijing and Tianjin. The JING-JIN-JI region was a net inflow region of CO2 emissions, and part of the emission responsibility was transferred. Beijing and Tianjin were net importers of CO2 emissions, and Hebei was a net outflow area of CO2 emissions. The key CO2-emitting sectors in the region were concentrated and very similar, so inter-regional mechanisms could be set up for joint prevention and control. Production and distribution of electricity, gas and water, and smelting and pressing of metals, had the highest reliance on CO2 emissions and took on the responsibility of other sectors. The EIO-LCA model can be used to analyze product-based and consumption-based CO2 emissions, which is helpful for the refined management of regional CO2 emission reduction, policy making, and stimulating reduction cooperation at the regional scale.

  16. Big and Little Feet: A Comparison of Provincial Level Consumption- and Production-Based Emissions Footprints

    Directory of Open Access Journals (Sweden)

    Sarah Dobson

    2017-09-01

    Full Text Available A comprehensive national climate policy needs to provide both producers and consumers with incentives for reducing greenhouse gas emissions. Too often, policy discussions focus on emissions reduction among producers. This limited perspective fails to take into account the complex relationship between emissions production in one region and consumption demands in another. All economic production requires both a producer and a consumer. If no consumer for a good or service exists, then that good or service will not be produced. We understand the producer’s role in generating Canada’s greenhouse gas emissions, but often forget the consumer’s role. In this paper, we explore both conventional production-based emissions accounting and consumption-based accounting, wherein all of the emissions generated in order to produce a final consumption good are allocated to the consumers of those goods. Production and consumption are not a simple case of cause and effect. Rather, production emissions diverge strongly across Canadian provinces while consumption emissions tend to be similar. Significant interprovincial and international trade flows in emissions enable this pattern. Recognition of these trade flows provides important insights for the development of Canada’s national climate change strategy. Interprovincial trade flows provide a strong argument in support of Canada’s forthcoming national carbon price. By ensuring that the large majority of emissions in Canada are similarly priced, regardless of where they are produced, it minimizes the risk of interprovincial carbon leakage (where companies avoid the carbon price by relocating to a jurisdiction with weaker climate measures) and increases the likelihood that Canadian consumers will face an incentive to adjust their demand for domestically produced carbon-intensive goods. Implementation of a national carbon price must make allowances for production sectors with significant international

  17. Time resolution in scintillator based detectors for positron emission tomography

    International Nuclear Information System (INIS)

    Gundacker, S.

    2014-01-01

    In the domain of medical photon detectors L(Y)SO scintillators are used for positron emission tomography (PET). The interest in time of flight (TOF) for PET is increasing, since measurements have shown that new crystals like L(Y)SO coupled to state-of-the-art photodetectors, e.g. silicon photomultipliers (SiPM), can reach coincidence time resolutions (CTRs) far below 500 ps FWHM. To achieve these goals it is important to study the processes in the whole detection chain, i.e. the high energy particle or gamma interaction in the crystal, the scintillation process itself, the light propagation in the crystal with the light transfer to the photodetector, and the electronic readout. In this thesis time resolution measurements for a PET-like system are performed in a coincidence setup utilizing the ultra fast amplifier discriminator NINO. We found that the time-over-threshold energy information provided by NINO shows a degradation in energy resolution for higher SiPM bias voltages. This is a consequence of the increasing dark count rate (DCR) of the SiPM with higher bias voltages together with the exponential decay of the signal. To overcome this problem and to operate the SiPM at its optimum voltage in terms of timing, we developed a new electronic board that employs NINO only as a low noise leading edge discriminator together with an analog amplifier which delivers the energy information. With this new electronic board we indeed improved the measured CTR by about 15%. To study the limits of time resolution in more depth we measured the CTR with 2x2x3mm3 LSO:Ce codoped 0.4%Ca crystals coupled to commercially available SiPMs (Hamamatsu S10931-50P MPPC) and achieved a CTR of 108±5ps FWHM at an energy of 511keV. We determined the influence of the data acquisition system and the electronics on the CTR to be 27±2ps FWHM and thus negligible. To quantitatively understand the measured values, we developed a Monte Carlo simulation tool in MATLAB that incorporates the timing
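
    The influence of photon statistics on the achievable timing can be illustrated with a toy Monte Carlo along the lines sketched above: emission times drawn from a bi-exponential scintillation pulse, smeared by single-photon detector jitter, with the earliest detected photon taken as the event timestamp. All parameter values below are rough assumptions for illustration, not the figures fitted in the thesis.

```python
# Hedged toy Monte Carlo of coincidence time resolution (CTR) from first-photon
# timing. Rise/decay times, photon yield and jitter are rough assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_events = 5000
n_photons = 3000                    # detected photons per 511 keV event (assumed)
tau_rise, tau_decay = 0.07, 30.0    # ns, roughly LSO:Ce,Ca-like (assumed)
sptr_sigma = 0.066                  # ns, single-photon time jitter (assumed)

def event_timestamp():
    # bi-exponential emission = sum of two exponential delays, plus detector jitter
    t = (rng.exponential(tau_decay, n_photons)
         + rng.exponential(tau_rise, n_photons)
         + rng.normal(0.0, sptr_sigma, n_photons))
    return t.min()                  # timestamp from the earliest detected photon

delta = np.array([event_timestamp() - event_timestamp() for _ in range(n_events)])
ctr_fwhm_ps = 2.355 * delta.std() * 1000.0   # Gaussian approximation of the FWHM
print(f"simulated CTR: {ctr_fwhm_ps:.0f} ps FWHM")
```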

  18. SeeSway - A free web-based system for analysing and exploring standing balance data.

    Science.gov (United States)

    Clark, Ross A; Pua, Yong-Hao

    2018-06-01

    Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy to use and platform independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace such as center of pressure or mass displacement. This allows it to be used with systems including criterion reference commercial force platforms and three dimensional motion analysis, smartphones, accelerometers and low-cost technology such as Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate
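
    Two of the standard metrics mentioned above (path length and root-mean-square amplitude) are straightforward to compute from an anterior-posterior / medial-lateral coordinate trace. The sketch below uses a synthetic centre-of-pressure signal and an assumed 100 Hz sampling rate, and is not the SeeSway implementation itself.

```python
# Hedged sketch: path length and RMS amplitude from a synthetic CoP trace.
import numpy as np

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                 # 30 s quiet-stance trial
rng = np.random.default_rng(7)
ap = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)
ml = 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(t.size)

def path_length(ap, ml):
    """Total length of the CoP trajectory (same units as the input trace)."""
    return np.sum(np.hypot(np.diff(ap), np.diff(ml)))

def rms_amplitude(x):
    """Root-mean-square amplitude about the mean position."""
    return np.sqrt(np.mean((x - x.mean()) ** 2))

print("path length:", path_length(ap, ml))
print("RMS AP / ML:", rms_amplitude(ap), rms_amplitude(ml))
```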

  19. Laser-Based Diagnostic Measurements of Low Emissions Combustor Concepts

    Science.gov (United States)

    Hicks, Yolanda R.

    2011-01-01

    This presentation provides a summary of the primarily laser-based measurement techniques used at NASA Glenn Research Center to characterize fuel injection, fuel/air mixing, and combustion. The report highlights the use of Planar Laser-Induced Fluorescence, Particle Image Velocimetry, and Phase Doppler Interferometry to obtain fuel injector patternation, fuel and air velocities, and fuel drop sizes and turbulence intensities during combustion. We also present a brief comparison between combustors burning standard JP-8 jet fuel and an alternative fuel. For this comparison, we used flame chemiluminescence and high speed imaging.

  20. Analysis of acoustic emission signals of fatigue crack growth and corrosion processes. Investigation of the possibilities for continuous condition monitoring of transport containers by acoustic emission testing; Analyse der Schallemissionssignale aus Ermuedungsrisswachstum und Korrosionsprozessen. Untersuchung der Moeglichkeiten fuer die kontinuierliche Zustandsueberwachung von Transportbehaeltern mittels Schallemissionspruefung

    Energy Technology Data Exchange (ETDEWEB)

    Wachsmuth, Janne

    2016-05-01

    Fatigue crack growth and active corrosion processes are the main causes of structural failures of transport products like road tankers, railway tank cars and ships. To prevent those failures, preventive, time-based maintenance is performed. However, preventive inspections are costly and include the risk of not detecting a defect, which could lead to a failure within the next service period. An alternative is the idea of continuous monitoring of the whole structure by means of acoustic emission testing (AT). With AT, defects within the material shall be detected and repaired directly after their appearance. Acoustic emission testing is an online non-destructive testing method. Acoustic emission (AE) arises from changes within the material and is transported by elastic waves through the material. If the AE event generates enough energy, the elastic wave propagates to the boundaries of the component, produces a displacement in the picometre scale and can be detected by a piezoelectric sensor. The sensor produces an electrical signal. From this AE signal, AE features such as the maximum amplitude or the frequency can be extracted. Methods of signal analysis are used to investigate the time and frequency dependency of signal groups. The purpose of the signal analysis is to connect the AE signal with the originating AE source. If predefined damage mechanisms are identified, referencing the damage condition of the structure is possible. Acoustic emission from events of the actual crack propagation process can for example lead to the crack growth rate or the stress intensity factor, both specific values from fracture mechanics. A new development in the domain of acoustic emission testing is the pattern recognition of AE signals. Specific features are extracted from the AE signals to assign them to their damage mechanisms. In this thesis the AE signals from the damage mechanisms corrosion and fatigue crack growth are compared and analysed. The damage mechanisms were

  1. New insights into survival trend analyses in cancer population-based studies: the SUDCAN methodology.

    Science.gov (United States)

    Uhry, Zoé; Bossard, Nadine; Remontet, Laurent; Iwaz, Jean; Roche, Laurent

    2017-01-01

    The main objective of the SUDCAN study was to compare, for 15 cancer sites, the trends in net survival and excess mortality rates from cancer 5 years after diagnosis between six European Latin countries (Belgium, France, Italy, Portugal, Spain and Switzerland). The data were extracted from the EUROCARE-5 database. The study period ranged from 6 (Portugal, 2000-2005) to 18 years (Switzerland, 1989-2007). Trend analyses were carried out separately for each country and cancer site; the number of cases ranged from 1500 to 104 000 cases. We developed an original flexible excess rate modelling strategy that accounts for the continuous effects of age, year of diagnosis, time since diagnosis and their interactions. Nineteen models were constructed; they differed in the modelling of the effect of the year of diagnosis in terms of linearity, proportionality and interaction with age. The final model was chosen according to the Akaike Information Criterion. The fit was assessed graphically by comparing model estimates versus nonparametric (Pohar-Perme) net survival estimates. Out of the 90 analyses carried out, the effect of the year of diagnosis on the excess mortality rate depended on age in 61 and was nonproportional in 64; it was nonlinear in 27 out of the 75 analyses where this effect was considered. The model fit was overall satisfactory. We analysed successfully 15 cancer sites in six countries. The refined methodology proved necessary for detailed trend analyses. It is hoped that three-dimensional parametric modelling will be used more widely in net survival trend studies as it has major advantages over stratified analyses.

  2. Research Based on the Acoustic Emission of Wind Power Tower Drum Dynamic Monitoring Technology

    Science.gov (United States)

    Zhang, Penglin; Sang, Yuan; Xu, Yaxing; Zhao, Zhiqiang

    The wind power tower drum is one of the key components of wind power equipment. Whether the tower drum operates safely directly affects the efficiency, life, and performance of the wind power equipment. During manufacture, installation, and operation the tower drum may sustain damage, and under wind load, gravity load, and long-term factors such as a poor working environment, cracks may initiate or the structure may distort, eventually resulting in instability or cracking of the tower drum and causing huge economic losses. Detecting crack damage and instability in the wind power tower drum is therefore especially important. In this chapter, acoustic emission is first used to monitor the whole tensile test of Q345E steel, the wind power tower drum material, and the tensile failure signals of the material are processed and analyzed. Then, based on acoustic emission testing technology, dynamic monitoring of the wind power tower drum is carried out, the whole structure is inspected and evaluated for active defects, and the collected acoustic emission signals are processed and analyzed, so that the acoustic emission source mechanism of the wind power tower drum can be preliminarily understood. Acoustic emission is an online, efficient, and economic method with very broad prospects. The editorial committee of nondestructive testing qualification and certification of personnel teaching material of science and technology industry of national defense, "Acoustic emission testing" (China Machine Press, 2005.1).

  3. Monitoring and analysis of air emissions based on condition models derived from process history

    Directory of Open Access Journals (Sweden)

    M. Liukkonen

    2016-12-01

    Full Text Available Evaluation of online information on operating conditions is necessary when reducing air emissions in energy plants. In this respect, automated monitoring and control are of primary concern, particularly in biomass combustion. As monitoring of emissions in power plants is ever more challenging because of low-grade fuels and fuel mixtures, new monitoring applications are needed to extract essential information from the large amount of measurement data. The management of emissions in energy boilers lacks economically efficient, fast, and competent computational systems that could support decision-making regarding the improvement of emission efficiency. In this paper, a novel emission monitoring platform based on the self-organizing map method is presented. The system is capable not only of visualizing the prevailing status of the process and detecting problem situations (i.e. increased emission release rates), but also of analyzing these situations automatically and presenting factors potentially affecting them. The system is demonstrated using measurement data from an industrial circulating fluidized bed boiler fired by forest residue as the primary fuel and coal as the supporting fuel.
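
    A minimal sketch of the self-organizing-map idea is given below: a small SOM is trained on data representing normal operation, and new samples whose quantisation error is unusually large are flagged as potential problem situations. The grid size, training schedule and synthetic data are assumptions for illustration, not the plant application described in the paper.

```python
# Hedged sketch: self-organizing map trained on "normal" operating data,
# with large quantisation error flagged as a potential emission problem.
import numpy as np

rng = np.random.default_rng(1)
normal_data = rng.normal(size=(1000, 5))     # e.g. load, O2, temperatures, NOx, CO

rows, cols, dim = 8, 8, normal_data.shape[1]
weights = rng.normal(size=(rows, cols, dim))
grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

def bmu(x):
    """Grid coordinates of the best-matching unit for sample x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(d.argmin(), d.shape)

n_iter = 5000
for it in range(n_iter):
    x = normal_data[rng.integers(len(normal_data))]
    lr = 0.5 * (1 - it / n_iter)             # decaying learning rate
    radius = 4.0 * (1 - it / n_iter) + 1.0   # decaying neighbourhood radius
    b = np.array(bmu(x))
    h = np.exp(-np.sum((grid - b) ** 2, axis=2) / (2 * radius ** 2))
    weights += lr * h[..., None] * (x - weights)

def quantisation_error(x):
    return np.linalg.norm(weights[bmu(x)] - x)

threshold = np.percentile([quantisation_error(x) for x in normal_data], 99)
new_sample = rng.normal(size=dim) + 4.0      # shifted state -> likely abnormal
print("alarm:", quantisation_error(new_sample) > threshold)
```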

  4. Programmable thermal emissivity structures based on bioinspired self-shape materials

    Science.gov (United States)

    Athanasopoulos, N.; Siakavellas, N. J.

    2015-12-01

    Programmable thermal emissivity structures based on bioinspired self-shape anisotropic materials were developed at macro-scale and further studied theoretically at smaller scales. We study a novel concept incorporating materials that are capable of transforming their shape via microstructural rearrangements under temperature stimuli, while avoiding the use of exotic shape memory materials or complex micro-mechanisms. Thus, programmed thermal emissivity behaviour of a surface is achievable. The self-shape structure reacts according to the temperature of the surrounding environment or the radiative heat flux. A surface which incorporates self-shape structures can be designed to quickly absorb radiative heat energy at low temperature levels, yet is simultaneously capable of passively controlling its maximum temperature in order to prevent overheating. It resembles a "game" of colours, where two or more materials coexist with different values of thermal emissivity/absorptivity/reflectivity. The transformation of the structure conceals or reveals one of the materials, creating a surface with programmable, and therefore variable, effective thermal emissivity. Variable thermal emissivity surfaces may be developed with a total hemispherical emissivity ratio (ɛEff_H/ɛEff_L) equal to 28.

  5. A new method for odour impact assessment based on spatial and temporal analyses of community response

    International Nuclear Information System (INIS)

    Henshaw, P.; Nicell, J.; Sikdar, A.

    2002-01-01

    Odorous emissions from stationary sources account for the majority of air pollution complaints to regulatory agencies. Regulators sometimes rely on the nuisance provisions of common law to assess odour impact, which is highly subjective. The other commonly used approach, the dilution-to-threshold principle, assumes that an odour is a problem simply if it is detected, without regard to the fact that only a segment of the population can detect the odour at concentrations near the threshold. The odour impact model (OIM) represents a significant improvement over current methods for quantifying odours, because it characterizes the dose-response relationship of the odour. Dispersion modelling can be used in conjunction with the OIM to estimate the probability of response in the surrounding vicinity, taking into account the local meteorological conditions. The objective of this research is to develop an objective method of assessing the impact of odorous airborne emissions. To this end, several metrics were developed to quantify the impact of an odorous stationary source on the surrounding community. These 'odour impact parameters' are: maximum concentration, maximum probability of response, footprint area, probability-weighted footprint area and the number of people responding to the odour. These impact parameters were calculated for a stationary odour source in Canada. Several remediation scenarios for reducing the odour impact were proposed and their effects on the impact parameters calculated. (author)

  6. An empirical model to predict road dust emissions based on pavement and traffic characteristics.

    Science.gov (United States)

    Padoan, Elio; Ajmone-Marsan, Franco; Querol, Xavier; Amato, Fulvio

    2018-06-01

    The relative impact of non-exhaust sources (i.e. road dust, tire wear, road wear and brake wear particles) on urban air quality is increasing. Among them, road dust resuspension generally has the highest impact on PM concentrations, but its spatio-temporal variability has rarely been studied and modeled. Some recent studies have attempted to observe and describe the time variability but, as it is driven by traffic and meteorology, uncertainty remains about the seasonality of emissions. The knowledge gap on spatial variability is much wider, as several factors have been pointed out as responsible for road dust build-up: pavement characteristics, traffic intensity and speed, fleet composition, proximity to traffic lights, but also the presence of external sources. However, no parameterization is available as a function of these variables. We investigated mobile road dust smaller than 10 μm (MF10) in two cities with different climatic and traffic conditions (Barcelona and Turin) to explore MF10 seasonal variability and the relationship between MF10 and site characteristics (pavement macrotexture, traffic intensity and proximity to braking zones). Moreover, we provide the first estimates of emission factors in the Po Valley in both summer and winter conditions. Our results showed a good inverse relationship between MF10 and macrotexture, traffic intensity and distance from the nearest braking zone. We also found a clear seasonal effect on road dust emissions, with higher emissions in summer, likely due to the lower pavement moisture. These results allowed us to build a simple empirical model predicting maximal dust loadings and, consequently, emission potential, based on the aforementioned data. This model will need to be scaled for meteorological effects, using methods accounting for weather and pavement moisture. This can significantly improve bottom-up emission inventories for the spatial allocation of emissions and air quality management, to select those roads with higher emissions
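
    The empirical relationship described above can be sketched as a simple log-linear regression of dust loading on macrotexture, traffic intensity and distance from the nearest braking zone; the synthetic data, coefficients and functional form below are assumptions for illustration, not the model fitted in the study.

```python
# Hedged sketch: log-linear regression of mobile road dust (MF10) on pavement
# macrotexture, traffic intensity and distance to the nearest braking zone.
# Synthetic data and coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 60
macrotexture = rng.uniform(0.3, 1.5, n)      # mean profile depth, mm
traffic = rng.uniform(2_000, 40_000, n)      # vehicles per day
dist_braking = rng.uniform(5, 500, n)        # metres to nearest braking zone

# Inverse relationships, as reported: more texture, traffic and distance -> less MF10
mf10 = np.exp(3.0 - 0.8 * macrotexture - 2e-5 * traffic
              - 2e-3 * dist_braking + 0.2 * rng.standard_normal(n))   # mg/m2

X = np.column_stack([np.ones(n), macrotexture, traffic, dist_braking])
beta, *_ = np.linalg.lstsq(X, np.log(mf10), rcond=None)   # ordinary least squares
print("fitted coefficients (intercept, texture, traffic, distance):", beta)

def predict_mf10(texture_mm, veh_per_day, dist_m):
    return float(np.exp(beta @ np.array([1.0, texture_mm, veh_per_day, dist_m])))

print("predicted MF10 near a busy braking zone on smooth pavement:",
      predict_mf10(0.4, 30_000, 20))
```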

  7. Opportunities for reducing environmental emissions from forage-based dairy farms

    Directory of Open Access Journals (Sweden)

    Tom Misselbrook

    2013-03-01

    Full Text Available Modern dairy production is inevitably associated with impacts on the environment, and the challenge for the industry today is to increase production to meet growing global demand while minimising emissions to the environment. Negative environmental impacts include gaseous emissions to the atmosphere of ammonia from livestock manure and fertiliser use, of methane from enteric fermentation and manure management, and of nitrous oxide from nitrogen applications to soils and from manure management. Emissions to water include nitrate, ammonium, phosphorus, sediment, pathogens and organic matter, deriving from nutrient applications to forage crops and/or the management of grazing livestock. This paper reviews the sources and impacts of such emissions in the context of a forage-based dairy farm and considers a number of potential mitigation strategies, giving some examples using the farm-scale model SIMSDAIRY. Most of the mitigation measures discussed are associated with systemic improvements in the efficiency of production in dairy systems. Important examples of mitigations include: improvements to dairy herd fertility, which can reduce methane and ammonia emissions by up to 24 and 17%, respectively; diet modification, such as the use of high sugar grasses for grazing, which are associated with reductions in cattle N excretion of up to 20% (and therefore lower N losses to the environment and potentially lower methane emissions), or reducing the crude protein content of the dairy cow diet through use of maize silage to reduce N excretion and methane emissions; and the use of nitrification inhibitors with fertiliser and slurry applications to reduce nitrous oxide emissions and nitrate leaching by up to 50%. Much can also be achieved through attention to the quantity, timing and method of application of nutrients to forage crops, and by utilising advances made through genetic improvements.

  8. Upward revision of global fossil fuel methane emissions based on isotope database.

    Science.gov (United States)

    Schwietzke, Stefan; Sherwood, Owen A; Bruhwiler, Lori M P; Miller, John B; Etiope, Giuseppe; Dlugokencky, Edward J; Michel, Sylvia Englund; Arling, Victoria A; Vaughn, Bruce H; White, James W C; Tans, Pieter P

    2016-10-06

    Methane has the second-largest global radiative forcing impact of anthropogenic greenhouse gases after carbon dioxide, but our understanding of the global atmospheric methane budget is incomplete. The global fossil fuel industry (production and usage of natural gas, oil and coal) is thought to contribute 15 to 22 per cent of methane emissions to the total atmospheric methane budget. However, questions remain regarding methane emission trends as a result of fossil fuel industrial activity and the contribution to total methane emissions of sources from the fossil fuel industry and from natural geological seepage, which are often co-located. Here we re-evaluate the global methane budget and the contribution of the fossil fuel industry to methane emissions based on long-term global methane and methane carbon isotope records. We compile the largest isotopic methane source signature database so far, including fossil fuel, microbial and biomass-burning methane emission sources. We find that total fossil fuel methane emissions (fossil fuel industry plus natural geological seepage) are not increasing over time, but are 60 to 110 per cent greater than current estimates owing to large revisions in isotope source signatures. We show that this is consistent with the observed global latitudinal methane gradient. After accounting for natural geological methane seepage, we find that methane emissions from natural gas, oil and coal production and their usage are 20 to 60 per cent greater than inventories. Our findings imply a greater potential for the fossil fuel industry to mitigate anthropogenic climate forcing, but we also find that methane emissions from natural gas as a fraction of production have declined from approximately 8 per cent to approximately 2 per cent over the past three decades.

  9. A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades

    Science.gov (United States)

    2016-09-01

    performance study of these algorithms in the particular problem of analysing backscatter signals from rotating blades. The report is organised as follows...provide further insight into the behaviour of the techniques. Here, the algorithms for MP, OMP, CGP, gOMP and ROMP terminate when 10 atoms are

  10. Conformational determination of [Leu]enkephalin based on theoretical and experimental VA and VCD spectral analyses

    DEFF Research Database (Denmark)

    Abdali, Salim; Jalkanen, Karl J.; Cao, X.

    2004-01-01

    Conformational determination of [Leu]enkephalin in DMSO-d6 is carried out using VA and VCD spectral analyses. Conformational energies, vibrational frequencies and VA and VCD intensities are calculated using DFT at B3LYP/6-31G* level of theory. Comparison between the measured spectra...

  11. The study of environmental state on road intersections based on emissions of car engines in Zhytomyr

    Directory of Open Access Journals (Sweden)

    Titarenko V.

    2016-08-01

    Full Text Available This paper presents the results of studies of environmental conditions at road intersections in Zhytomyr based on car engine emissions. Statistical methods were used to determine the intensity of traffic flows from transport emissions according to video monitoring records. Intersections with the highest traffic flow intensity, where there is a high risk of car emissions accumulating above the applicable standard, were chosen for the assessment of ecological sustainability. The pollution calculation was done for the intersection of Nebesnoi Sotni street and M. Grushevskogo street. The calculation results were compared with the permissible concentration limits, and the conclusions emphasize the difficulty of the problem and the urgency of solving it. The system of ecological safety in Zhytomyr requires significant improvement, taking into account that actual pollution values exceed the normative values. Recommendations concerning the ecological sustainability of intersections can be developed through the optimization of traffic flows with the use of intelligent traffic systems.

  12. An FBG acoustic emission source locating system based on PHAT and GA

    Science.gov (United States)

    Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun

    2017-09-01

    Using the acoustic emission locating technology to monitor the health of the structure is important for ensuring the continuous and healthy operation of the complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish the sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, time difference extraction algorithm based on the phase transform (PHAT) weighted generalized cross correlation provides the necessary conditions for the accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested in the marble plate surface, and the results show that the absolute locating error is within the range of 10 mm, which proves the accuracy of this locating method.
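
    The two steps described above (PHAT-weighted generalised cross-correlation for the time differences, followed by an evolutionary search over candidate source positions) can be sketched as follows. The sensor layout, wave speed and simulated burst are invented for illustration, and scipy's differential_evolution is used as a convenient stand-in for the genetic algorithm.

```python
# Hedged sketch: GCC-PHAT time-difference extraction plus an evolutionary
# search over candidate source positions. Geometry, wave speed and signals
# are synthetic; differential evolution stands in for the genetic algorithm.
import numpy as np
from scipy.optimize import differential_evolution

fs, c = 1_000_000.0, 3000.0                 # sample rate (Hz) and wave speed (m/s), assumed
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])  # four FBG sensors (m)
source = np.array([0.32, 0.18])             # "true" AE source, used only to simulate data
rng = np.random.default_rng(3)

def gcc_phat(x, y):
    """Delay of y relative to x (seconds) from PHAT-weighted cross-correlation."""
    n = 2 * len(x)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = np.conj(X) * Y
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)
    cc = np.concatenate((cc[-len(x) + 1:], cc[:len(x)]))   # lags -(N-1) .. +(N-1)
    return (np.argmax(cc) - (len(x) - 1)) / fs

# Simulate a short AE burst arriving at each sensor with the true propagation delays
t = np.arange(0, 2e-4, 1 / fs)
burst = np.sin(2 * np.pi * 150e3 * t) * np.exp(-t / 4e-5)

def signal_at(sensor):
    i = int(round(np.linalg.norm(sensor - source) / c * fs))
    s = np.zeros(2048)
    s[i:i + burst.size] = burst
    return s + 0.01 * rng.standard_normal(2048)

signals = [signal_at(s) for s in sensors]
tdoa = np.array([gcc_phat(signals[0], sig) for sig in signals[1:]])  # relative to sensor 0

def residual(p):
    d0 = np.linalg.norm(p - sensors[0])
    model = np.array([(np.linalg.norm(p - s) - d0) / c for s in sensors[1:]])
    return np.sum((model - tdoa) ** 2)

result = differential_evolution(residual, bounds=[(0, 0.5), (0, 0.5)], seed=0)
print("estimated source position (m):", result.x)
```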

  13. Diagnostic of the temperature and differential emission measure (DEM based on Hinode/XRT data

    Directory of Open Access Journals (Sweden)

    P. Rudawy

    2008-10-01

    Full Text Available We discuss here various methodologies and an optimal strategy for temperature and emission measure diagnostics based on Hinode X-Ray Telescope data. As an example of our results we present the determination of the temperature distribution of the X-ray emitting plasma using a filter ratio method and three different methods for calculating the differential emission measure (DEM). We have found that all these methods give results similar to the two-filter ratio method. Additionally, all methods of DEM calculation gave similar solutions. We can state that the majority of the pairs of Hinode filters allow one to derive the temperature and emission measure in the isothermal plasma approximation using standard diagnostics based on the two-filter ratio method. In cases of strong flares one can also expect good agreement between the results obtained using the Withbroe-Sylwester, genetic algorithm and least-squares methods of DEM evaluation.

  14. Vehicle-specific emissions modeling based upon on-road measurements.

    Science.gov (United States)

    Frey, H Christopher; Zhang, Kaishan; Rouphail, Nagui M

    2010-05-01

    Vehicle-specific microscale fuel use and emissions rate models are developed based upon real-world hot-stabilized tailpipe measurements made using a portable emissions measurement system. Consecutive averaging periods of one to three multiples of the response time are used to compare two semiempirical physically based modeling schemes. One scheme is based on internally observable variables (IOVs), such as engine speed and manifold absolute pressure, while the other is based on externally observable variables (EOVs), such as speed, acceleration, and road grade. For NO, HC, and CO emission rates, the average R(2) ranged from 0.41 to 0.66 for the former and from 0.17 to 0.30 for the latter. The EOV models have R(2) for CO(2) of 0.43 to 0.79 versus 0.99 for the IOV models. The models are sensitive to episodic events in driving cycles such as high acceleration. Intervehicle and fleet average modeling approaches are compared; the former account for microscale variations that might be useful for some types of assessments. EOV-based models have practical value for traffic management or simulation applications since IOVs usually are not available or not used for emission estimation.

  15. Sensitivity analyses of woody species exposed to air pollution based on ecophysiological measurements.

    Science.gov (United States)

    Wen, Dazhi; Kuang, Yuanwen; Zhou, Guoyi

    2004-01-01

    Air pollution has been a major problem in the Pearl River Delta of south China, particularly during the last two decades. Emissions of air pollutants from industries have already damaged natural communities and environments across a wide range of the Delta area. Leaf parameters such as chlorophyll fluorescence, leaf area (LA), dry weight (DW) and leaf mass per area (LMA) have previously been used as specific indexes of environmental stress. This study aims to determine in situ whether the daily variation of chlorophyll fluorescence and other ecophysiological parameters in five seedlings of three woody species, Ilex rotunda, Ficus microcarpa and Machilus chinensis, could be used alone or in combination with other measurements as sensitivity indexes for diagnosing air pollution stress and, hence, for choosing suitable tree species for urban afforestation in the Delta area. Five seedlings of each species were transplanted into pot containers after acclimation under shading conditions. Chlorophyll fluorescence measurements were made in situ with a portable fluorometer (OS-30, Opti-Sciences, U.S.A.). Ten random samples of leaves were picked from each species for LA measurements with an area meter (CI-203, CID, Inc., U.S.A.). DW was determined after the leaf samples were dried to a constant weight at 65 degrees C. LMA was calculated as the ratio DW/LA. Leaf N content was analyzed by the Kjeldahl method, and the extraction of pigments was carried out according to Lin et al. The analysis of daily mean Fv/Fm (where Fv is the variable fluorescence and Fm the maximum fluorescence) showed that Ilex rotunda and Ficus microcarpa were highly resistant to pollution stress, followed by Machilus chinensis, implying that the efficiency of photosystem II in I. rotunda was less affected by air pollutants than in the other two species. Little difference in the daily change of Fv/Fm in I. rotunda between the polluted and the clean site was observed. However, a relatively large

  16. Fugitive emission source characterization using a gradient-based optimization scheme and scalar transport adjoint

    Science.gov (United States)

    Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.

    2018-05-01

    Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
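
    The essence of the gradient-based reconstruction can be sketched with a linear source-receptor relationship: if modelled concentrations are c = H q for candidate source strengths q, the gradient of the measurement misfit with respect to q is obtained by applying the transpose (adjoint) of H to the residual. The grid, the synthetic footprint matrix and the two "leaks" below are illustrative assumptions, not the gas-plant case of the paper.

```python
# Hedged sketch: projected gradient descent for source strengths q in c = H q,
# where H^T applied to the residual plays the role of the adjoint step.
# The footprint matrix H and the "true" leaks are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_receptors, n_cells = 60, 40                 # downstream measurements vs candidate cells
H = rng.exponential(scale=1.0, size=(n_receptors, n_cells))   # stand-in footprints

q_true = np.zeros(n_cells)
q_true[[7, 23]] = [2.0, 0.7]                  # two fugitive sources (arbitrary units)
c_obs = H @ q_true + 0.05 * rng.standard_normal(n_receptors)

q = np.zeros(n_cells)
step = 1e-4
for _ in range(20000):
    misfit = H @ q - c_obs                    # residual at the receptors
    grad = H.T @ misfit                       # adjoint (transpose) step
    q = np.clip(q - step * grad, 0.0, None)   # gradient step, projected onto q >= 0

print("cells with the largest recovered strengths:", np.argsort(q)[-3:])
print("recovered strengths at the true source cells:", q[[7, 23]])
```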

  17. Provincial Carbon Emissions Reduction Allocation Plan in China Based on Consumption Perspective

    Directory of Open Access Journals (Sweden)

    Xuecheng Wang

    2018-04-01

    Full Text Available China is a country with substantial differences in economic development, energy consumption mix, resources, and technologies, as well as the development path at the provincial level. Therefore, China’s provinces have different potential and degrees of difficulty to carry out carbon emission reduction (CER requirements. In addition, interprovincial trade, with a large amount of embodied carbon emissions, has become the fastest growing driver of China’s total carbon emissions. A reasonable CER allocation plan is, therefore, crucial for realizing the commitment that China announced in the Paris Agreement. How to determine a fair way to allocate provincial CER duties has become a significant challenge for both policy-makers and researchers. In this paper, ecological network analysis (ENA, combined with a multi-regional input-output model (MRIO, is adopted to build an ecological network of embodied emissions across 30 provinces. Then, by using flow analysis and utility analysis based on the ENA model, the specific relationships among different provinces were determined, and the amount of responsibility that a certain province should take quantified, with respect to the embodied carbon emission (ECE flows from interprovincial trade. As a result, we suggest a new CER allocation plan, based on the detailed data of interprovincial relationships and ECE flows.

  18. Qualitative tissue differentiation by analysing the intensity ratios of atomic emission lines using laser induced breakdown spectroscopy (LIBS): prospects for a feedback mechanism for surgical laser systems.

    Science.gov (United States)

    Kanawade, Rajesh; Mahari, Fanuel; Klämpfl, Florian; Rohde, Maximilian; Knipfer, Christian; Tangermann-Gerk, Katja; Adler, Werner; Schmidt, Michael; Stelzle, Florian

    2015-01-01

    The research work presented in this paper focuses on qualitative tissue differentiation by monitoring the intensity ratios of atomic emissions using 'Laser Induced Breakdown Spectroscopy' (LIBS) on the plasma plume created during laser tissue ablation. The background of this study is to establish a real time feedback control mechanism for clinical laser surgery systems during the laser ablation process. Ex-vivo domestic pig tissue samples (muscle, fat, nerve and skin) were used in this experiment. Atomic emission intensity ratios were analyzed to find a characteristic spectral line for each tissue. The results showed characteristic elemental emission intensity ratios for the respective tissues. The spectral lines and intensity ratios of these specific elements varied among the different tissue types. The main goal of this study is to qualitatively and precisely identify different tissue types for tissue specific laser surgery. © 2015 The Authors. Journal of Biophotonics published by WILEY-VCH Verlag.

  19. Molecular systematics of Indian Alysicarpus (Fabaceae) based on analyses of nuclear ribosomal DNA sequences.

    Science.gov (United States)

    Gholami, Akram; Subramaniam, Shweta; Geeta, R; Pandey, Arun K

    2017-06-01

    Alysicarpus Necker ex Desvaux (Fabaceae, Desmodieae) consists of ~30 species that are distributed in tropical and subtropical regions of the world. In India, the genus is represented by ca. 18 species, of which seven are endemic. Sequences of the nuclear internal transcribed spacer (ITS) from 38 accessions representing 16 Indian species were subjected to phylogenetic analyses. The ITS sequence data strongly support the monophyly of the genus Alysicarpus. Analyses revealed four major well-supported clades within Alysicarpus. Ancestral state reconstructions were done for two morphological characters, namely calyx length in relation to pod (macrocalyx and microcalyx) and pod surface ornamentation (transversely rugose and nonrugose). The present study is the first report on molecular systematics of Indian Alysicarpus.

  20. Life-Cycle Energy Use and Greenhouse Gas Emissions Analysis for Bio-Liquid Jet Fuel from Open Pond-Based Micro-Algae under China Conditions

    Directory of Open Access Journals (Sweden)

    Xiliang Zhang

    2013-09-01

    Full Text Available A life-cycle analysis (LCA) of greenhouse gas (GHG) emissions and energy use was performed to study bio-jet fuel (BJF) production from micro-algae grown in open ponds under Chinese conditions using the Tsinghua University LCA Model (TLCAM). Attention was paid to energy recovery through biogas production and cogeneration of heat and power (CHP) from the residual biomass after oil extraction, including fugitive methane (CH4) emissions during the production of biogas and nitrous oxide (N2O) emissions during the use of digestate (the solid residue from anaerobic digestion) as agricultural fertilizer. Analyses were performed based on examination of process parameters, mass balance conditions, material requirements, energy consumption and the realities of energy supply and transport in China (i.e., electricity generation and heat supply primarily based on coal, and multiple transport modes). Our LCA result for the BJF pathway showed that, compared with the traditional petrochemical pathway, this new pathway will increase overall fossil energy use and carbon emissions by 39% and 70%, respectively, while decreasing petroleum consumption by about 84%, based on the same units of energy service. Moreover, the energy conservation and emission reduction benefits of this new pathway may be achieved through two sets of approaches: wider adoption of low-carbon process fuels, and optimization of the algae cultivation, harvest and oil extraction processes.

  1. Energy and carbon emissions analysis and prediction of complex petrochemical systems based on an improved extreme learning machine integrated interpretative structural model

    International Nuclear Information System (INIS)

    Han, Yongming; Zhu, Qunxiong; Geng, Zhiqiang; Xu, Yuan

    2017-01-01

    Highlights: • The ELM integrated ISM (ISM-ELM) method is proposed. • The proposed method is more efficient and accurate than the ELM on UCI data sets. • Energy and carbon emissions analysis and prediction of petrochemical industries based on ISM-ELM is obtained. • The proposed method is valid for improving energy efficiency and reducing carbon emissions of ethylene plants. - Abstract: Energy saving and carbon emissions reduction in the petrochemical industry are affected by many factors, so it is difficult to analyze and optimize the energy use of complex petrochemical systems accurately. This paper proposes an energy and carbon emissions analysis and prediction approach based on an improved extreme learning machine (ELM) integrated with an interpretative structural model (ISM), denoted ISM-ELM. The ISM, based on the partial correlation coefficient, is utilized to analyze the key parameters that affect the energy use and carbon emissions of the complex petrochemical system, and can denoise and reduce the dimensions of the data to decrease the training time and errors of the ELM prediction model. Meanwhile, in terms of model accuracy and training time, the robustness and effectiveness of the ISM-ELM model are better than those of the ELM, as shown on standard data sets from the University of California Irvine (UCI) repository. Moreover, a multi-input single-output (MISO) model of the energy use and carbon emissions of complex ethylene systems is established based on the ISM-ELM. Finally, detailed analyses and simulations using real ethylene plant data demonstrate the effectiveness of the ISM-ELM and can guide improvements in energy saving and carbon emissions reduction in complex petrochemical systems.
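
    The ELM part of the approach can be illustrated in a few lines: hidden-layer weights are drawn at random and only the output weights are solved in closed form, which is what makes ELM training fast. The synthetic data below stand in for energy/emission records and their influencing factors, and the ISM screening step of the paper is not reproduced.

```python
# Hedged sketch of a basic extreme learning machine (ELM) regressor.
# Data and dimensions are illustrative; the ISM screening step is omitted.
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_features, n_hidden = 300, 6, 40

X = rng.normal(size=(n_samples, n_features))           # e.g. feed load, steam, fuel mix ...
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(n_samples)

W = rng.normal(size=(n_features, n_hidden))            # random input weights, never trained
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)

beta = np.linalg.pinv(hidden(X)) @ y                   # output weights in closed form

X_test = rng.normal(size=(50, n_features))
y_test = 2.0 * X_test[:, 0] - 1.5 * X_test[:, 1] ** 2
pred = hidden(X_test) @ beta
print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```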

  2. Performance analyses of naval ships based on engineering level of simulation at the initial design stage

    Directory of Open Access Journals (Sweden)

    Dong-Hoon Jeong

    2017-07-01

    Full Text Available Naval ships are assigned many and varied missions. Their performance is critical for mission success, and depends on the specifications of the components. This is why performance analyses of naval ships are required at the initial design stage. Since the design and construction of naval ships take a very long time and incur a huge cost, Modeling and Simulation (M&S) is an effective method for performance analyses. Thus in this study, a simulation core is proposed to analyze the performance of naval ships considering their specifications. This simulation core can perform the engineering level of simulations, considering the mathematical models for naval ships, such as maneuvering equations and passive sonar equations. Also, the simulation models of the simulation core follow Discrete EVent system Specification (DEVS) and Discrete Time System Specification (DTSS) formalisms, so that simulations can progress over discrete events and discrete times. In addition, applying DEVS and DTSS formalisms makes the structure of simulation models flexible and reusable. To verify the applicability of this simulation core, such a simulation core was applied to simulations for the performance analyses of a submarine in an Anti-SUrface Warfare (ASUW) mission. These simulations were composed of two scenarios. The first scenario of submarine diving carried out maneuvering performance analysis by analyzing the pitch angle variation and depth variation of the submarine over time. The second scenario of submarine detection carried out detection performance analysis by analyzing how well the sonar of the submarine resolves adjacent targets. The results of these simulations confirm that the simulation core of this study could be applied to the performance analyses of naval ships considering their specifications.

  3. Regional analyses of labor markets and demography: a model based Norwegian example.

    Science.gov (United States)

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  4. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    OpenAIRE

    Zhigang Zuo; Shuhong Liu; Yizhang Fan; Yulin Wu

    2014-01-01

    It is important to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were first carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were...

  5. IMPROVING CONTROL ROOM DESIGN AND OPERATIONS BASED ON HUMAN FACTORS ANALYSES OR HOW MUCH HUMAN FACTORS UPGRADE IS ENOUGH ?

    Energy Technology Data Exchange (ETDEWEB)

    HIGGINS,J.C.; OHARA,J.M.; ALMEIDA,P.

    2002-09-19

    The Jose Cabrera nuclear power plant is a one-loop Westinghouse pressurized water reactor. In the control room, the displays and controls used by operators for the emergency operating procedures are distributed on front and back panels. This configuration contributed to risk in the probabilistic safety assessment where important operator actions are required. This study was undertaken to evaluate the impact of the design on crew performance and plant safety and to develop design improvements. Five potential effects were identified. Then NUREG-0711 [1] programmatic human factors analyses were conducted to systematically evaluate the CR layout to determine if there was evidence of the potential effects. These analyses included operating experience review, PSA review, task analyses, and walkthrough simulations. Based on the results of these analyses, a variety of control room modifications were identified. From the alternatives, a selection was made that provided a reasonable balance between performance, risk and economics, and modifications were made to the plant.

  6. Activity-based costing evaluation of a [F-18]-fludeoxyglucose positron emission tomography study

    NARCIS (Netherlands)

    Krug, Bruno; Van Zanten, Annie; Pirson, Anne-Sophie; Crott, Ralph; Vander Borght, Thierry

    2009-01-01

    Objective: The aim of the study is to use the activity-based costing approach to give better insight into the actual cost structure of a positron emission tomography procedure (FDG-PET) by defining the constituting components and by simulating the impact of possible resource or practice changes.

  7. Dietary nitrate supplementation reduces methane emission in beef cattle fed sugarcane-based diets

    NARCIS (Netherlands)

    Hulshof, R.B.A.; Berndt, A.; Gerrits, W.J.J.; Dijkstra, J.; Zijderveld, van S.M.; Newbold, J.R.; Perdok, H.B.

    2012-01-01

    The objective of this study was to determine the effect of dietary nitrate on methane emission and rumen fermentation parameters in Nellore × Guzera (Bos indicus) beef cattle fed a sugarcane-based diet. The experiment was conducted with 16 steers weighing 283 ± 49 kg (mean ± SD), 6 rumen cannulated

  8. The NLS-Based Nonlinear Grey Multivariate Model for Forecasting Pollutant Emissions in China

    Directory of Open Access Journals (Sweden)

    Ling-Ling Pei

    2018-03-01

    Full Text Available The relationship between pollutant discharge and economic growth has been a major research focus in environmental economics. To accurately estimate the nonlinear change law of China’s pollutant discharge with economic growth, this study establishes a transformed nonlinear grey multivariable TNGM(1,N) model based on the nonlinear least squares (NLS) method. The Gauss–Seidel iterative algorithm was used to solve the parameters of the TNGM(1,N) model based on the NLS principle. This algorithm improves the precision of the model by continuous iteration, progressively approximating the optimal regression coefficients of the nonlinear model. In the empirical analysis, the traditional grey multivariate model GM(1,N) and the NLS-based TNGM(1,N) model were adopted to forecast and analyze the relationship among wastewater discharge per capita (WDPC), per capita emissions of SO2 and dust, and GDP per capita in China during 1996–2015. Results indicated that the NLS algorithm effectively helps the grey multivariable model identify the nonlinear relationship between pollutant discharge and economic growth. The NLS-based TNGM(1,N) model achieves greater precision when forecasting WDPC and per capita SO2 and dust emissions than the traditional GM(1,N) model; WDPC shows a growing tendency aligned with GDP growth, while per capita emissions of SO2 and dust decline accordingly.
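
    For readers unfamiliar with grey models, a minimal sketch of the classical univariate GM(1,1) model is given below; it is a simpler relative of the TNGM(1,N) described above, not the authors' model, and the input series is a placeholder rather than the paper's data.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a classical GM(1,1) grey model to a positive time series x0 (1-D array)."""
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background (mean generating) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])       # design matrix
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing coefficient a, grey input b
    return a, b

def gm11_forecast(x0, a, b, steps):
    """Forecast `steps` values beyond the observed series."""
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulated series
    x0_hat = np.diff(x1_hat, prepend=0.0)              # inverse accumulation
    return x0_hat[n:]

# Illustrative declining series (hypothetical per-capita discharge values, not from the paper)
x0 = np.array([52.0, 50.1, 48.7, 47.0, 45.8, 44.1])
a, b = gm11_fit(x0)
print(gm11_forecast(x0, a, b, steps=3))   # three-step-ahead forecast
```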

  9. Activity-based costing evaluation of a [(18)F]-fludeoxyglucose positron emission tomography study.

    Science.gov (United States)

    Krug, Bruno; Van Zanten, Annie; Pirson, Anne-Sophie; Crott, Ralph; Borght, Thierry Vander

    2009-10-01

    The aim of the study is to use the activity-based costing approach to give better insight into the actual cost structure of a positron emission tomography procedure (FDG-PET) by defining the constituting components and by simulating the impact of possible resource or practice changes. The cost data were obtained from the hospital administration, personnel and vendor interviews as well as from structured questionnaires. A process map separates the process into 16 patient- and non-patient-related activities, to which the detailed cost data are related. One-way sensitivity analyses show the degree to which uncertainty in the different parameters affects the individual cost, and evaluate the impact of possible resource or practice changes such as the acquisition of a hybrid PET/CT device, the patient throughput or the sales price of a 370 MBq (18)F-FDG patient dose. The PET centre spends 73% of its time on clinical activities, and the resting time after injection of the tracer (42%) is the single largest departmental cost element. The tracer cost and the operational time have the most influence on the cost per procedure. The analysis shows a total cost per FDG-PET ranging from 859 Euro for a BGO PET camera to 1142 Euro for a 16-slice PET/CT system, with the resource costs distributed as follows: materials (44%), equipment (24%), wages (16%), space (6%) and hospital overhead (10%). The cost of FDG-PET is mainly influenced by the cost of the radiopharmaceutical. Therefore, the latter rather than the operational time should be reduced in order to improve its cost-effectiveness.
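
    A minimal sketch of the activity-based costing logic described above, with purely illustrative activities, driver rates and quantities; none of the figures are taken from the study.

```python
# Minimal activity-based costing sketch for a PET procedure.
# All activities, cost-driver rates and quantities are illustrative placeholders.
activities = {
    # activity: (cost-driver rate in EUR per unit, driver units consumed per scan)
    "patient reception":      (30.0, 0.5),    # EUR/staff-hour, hours per scan
    "tracer dose (18F-FDG)":  (1.0, 300.0),   # EUR per dose-unit, units per scan
    "uptake/resting phase":   (80.0, 1.0),    # EUR/room-hour, hours per scan
    "image acquisition":      (250.0, 0.4),   # EUR/scanner-hour, hours per scan
    "reporting":              (90.0, 0.3),    # EUR/physician-hour, hours per scan
}

cost_per_scan = sum(rate * qty for rate, qty in activities.values())
print(f"Cost per procedure: {cost_per_scan:.0f} EUR")

# One-way sensitivity: vary the tracer price +/-20% while holding everything else fixed.
rate, qty = activities["tracer dose (18F-FDG)"]
for factor in (0.8, 1.0, 1.2):
    total = cost_per_scan - rate * qty + factor * rate * qty
    print(f"tracer price x{factor:.1f}: {total:.0f} EUR")
```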

  10. Policy applications of a highly resolved spatial and temporal onroad carbon dioxide emissions data product for the U.S.: Analyses and their implications for mitigation

    Science.gov (United States)

    Mendoza Lebrun, Daniel

    Onroad CO2 emissions have previously been analyzed as part of overall GHG emissions, but those studies have suffered from one or more of five shortcomings: 1) the spatial resolution was coarse, usually encompassing a region or the entire U.S.; 2) the temporal resolution was coarse (annual or monthly); 3) the study region was limited, usually a metropolitan planning organization (MPO) or state; 4) fuel sales were used as a proxy to quantify fuel consumption instead of focusing on travel; 5) the spatial heterogeneity of fleet and road network composition was not considered and national averages were used instead. Normalized vehicle-type state-level spatial biases range from 2.6% to 8.1%, while the road type classification biases range from -6.3% to 16.8%. These biases are found to cause errors in reduction estimates as large as ±60%, corresponding to ±0.2 MtC, for a national-average emissions mitigation strategy focused on a 10% emissions reduction from a single vehicle class. Temporal analysis shows distinct emissions seasonality that is particularly visible in the northernmost latitudes, demonstrating peak-to-peak deviations from the annual mean of up to 50%. The hourly structure shows peak-to-peak deviation from a weekly average of up to 200% for heavy-duty (HD) vehicles and 140% for light-duty (LD) vehicles. The present study focuses on reduction of travel and fuel economy improvements by putting forth several mitigation scenarios aimed at reducing VMT and increasing vehicle fuel efficiency. It was found that the most effective independent reduction strategies are those that increase fuel efficiency by extending the corporate average fuel economy (CAFE) standards, or that reduce fuel consumption through price increases. These two strategies show cumulative emissions reductions of approximately 11% and 12%, respectively, from a business as usual (BAU) approach over the 2000-2050 period. The U.S. onroad transportation sector is long overdue a comprehensive study

  11. Cross-border electricity market effects due to price caps in an emission trading system: An agent-based approach

    International Nuclear Information System (INIS)

    Richstein, Jörn C.; Chappin, Emile J.L.; Vries, Laurens J. de

    2014-01-01

    The recent low CO2 prices in the European Union Emission Trading Scheme (EU ETS) have triggered a discussion whether the EU ETS needs to be adjusted. We study the effects of CO2 price floors and a price ceiling on the dynamic investment pathway of two interlinked electricity markets (loosely based on Great Britain, which already has introduced a price floor, and on Central Western Europe). Using an agent-based electricity market simulation with endogenous investment and a CO2 market (including banking), we analyse the cross-border effects of national policies as well as system-wide policy options. A common, moderate CO2 auction reserve price results in a more continuous decarbonisation pathway. This reduces CO2 price volatility and the occurrence of carbon shortage price periods, as well as the average cost to consumers. A price ceiling can shield consumers from extreme price shocks. These price restrictions do not cause a large risk of an overall emissions overshoot in the long run. A national price floor lowers the cost to consumers in the other zone; the larger the zone with the price floor, the stronger the effect. Price floors that are too high lead to inefficiencies in investment choices and to higher consumer costs. - Highlights: • Cross-border effects of CO2 policies were investigated with an agent-based model. • The current EU ETS might cause CO2 price shocks and CO2 price volatility. • A CO2 auction reserve price does not lower welfare, but lowers CO2 price volatility. • A national CO2 price floor lowers consumer cost in the other countries. • A CO2 price ceiling does not lead to an overshoot of emissions

  12. Quantifying the relative contribution of natural gas fugitive emissions to total methane emissions in Colorado, Utah, and Texas using mobile isotopic methane analysis based on Cavity Ringdown Spectroscopy

    Science.gov (United States)

    Rella, Chris; Winkler, Renato; Sweeney, Colm; Karion, Anna; Petron, Gabrielle; Crosson, Eric

    2014-05-01

    Fugitive emissions of methane into the atmosphere are a major concern facing the natural gas production industry. Because natural gas yields more energy than coal per kg of carbon dioxide emitted into the atmosphere, it represents an attractive alternative to coal for electricity generation, provided that the fugitive emissions of methane are kept under control. A key step in assessing these emissions in a given region is partitioning the observed methane emissions between natural gas fugitive emissions and other sources of methane, such as landfills or agricultural activities. One effective method for assessing the contribution of these different sources is stable isotope analysis, using the isotopic carbon signature to distinguish between natural gas and landfills or ruminants. We present measurements of methane using a mobile spectroscopic stable isotope analyzer based on cavity ringdown spectroscopy, in three intense natural gas-producing regions of the United States: the Denver-Julesburg basin in Colorado, the Uintah basin in Utah, and the Barnett Shale in Texas. Performance of the CRDS isotope analyzer is presented, including precision, calibration, stability, and the potential for measurement bias due to other atmospheric constituents. Mobile isotope measurements of individual sources and in the nocturnal boundary layer have been combined to establish the fraction of the observed methane emissions that can be attributed to natural gas activities. The fraction of total methane emissions in the Denver-Julesburg basin attributed to natural gas emissions is 78 +/- 13%. In the Uinta basin, which has no other significant sources of methane, the fraction is 96 +/- 15%. In addition, results from the Barnett shale are presented, which includes a major urban center (Dallas / Ft. Worth). Methane emissions in this region are spatially highly heterogeneous. Spatially-resolved isotope and concentration measurements are interpreted using a simple emissions model to
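
    The source apportionment described above rests on a two-endmember isotopic mixing calculation; a minimal sketch follows, with illustrative δ13C endmember values that are assumptions rather than values from the study.

```python
def natural_gas_fraction(delta_obs, delta_gas, delta_bio):
    """Two-source isotopic mixing: fraction of the methane enhancement attributable to natural gas.

    delta_obs -- d13C signature of the observed methane enhancement (permil)
    delta_gas -- d13C endmember for natural gas sources (permil)
    delta_bio -- d13C endmember for biogenic sources such as landfills or ruminants (permil)
    """
    return (delta_obs - delta_bio) / (delta_gas - delta_bio)

# Illustrative endmembers and observation (placeholder values, not the study's data)
print(natural_gas_fraction(delta_obs=-55.0, delta_gas=-47.0, delta_bio=-62.0))  # ~0.47
```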

  13. Data to calculate emissions intensity for individual beef cattle reared on pasture-based production systems

    Directory of Open Access Journals (Sweden)

    G.A. McAuliffe

    2018-04-01

    Full Text Available With increasing concern about environmental burdens originating from livestock production, the importance of farming system evaluation has never been greater. In order to form a basis for trade-off analysis of pasture-based cattle production systems, liveweight data from 90 Charolais × Hereford-Friesian calves were collected at a high temporal resolution at the North Wyke Farm Platform (NWFP) in Devon, UK. These data were then applied to the Intergovernmental Panel on Climate Change (IPCC) modelling framework to estimate on-farm methane emissions under three different pasture management strategies, completing a foreground dataset required to calculate emissions intensity of individual beef cattle.

  14. Nine years of global hydrocarbon emissions based on source inversion of OMI formaldehyde observations

    Directory of Open Access Journals (Sweden)

    M. Bauwens

    2016-08-01

    Full Text Available As formaldehyde (HCHO) is a high-yield product in the oxidation of most volatile organic compounds (VOCs) emitted by fires, vegetation, and anthropogenic activities, satellite observations of HCHO are well suited to inform us on the spatial and temporal variability of the underlying VOC sources. The long record of space-based HCHO column observations from the Ozone Monitoring Instrument (OMI) is used to infer emission flux estimates for pyrogenic and biogenic volatile organic compounds (VOCs) on the global scale over 2005–2013. This is realized through source inverse modeling, which consists of optimizing the emissions in a chemistry-transport model (CTM) in order to minimize the discrepancy between the observed and modeled HCHO columns. The top–down fluxes are derived in the global CTM IMAGESv2 by an iterative minimization algorithm based on the full adjoint of IMAGESv2, starting from a priori emission estimates provided by the newly released GFED4s (Global Fire Emission Database, version 4s) inventory for fires, and by the MEGAN-MOHYCAN inventory for isoprene emissions. The top–down fluxes are compared to two independent inventories for fire (GFAS and FINNv1.5) and isoprene emissions (MEGAN-MACC and GUESS-ES). The inversion indicates a moderate decrease (ca. 20%) in the average annual global fire and isoprene emissions, from 2028 Tg C in the a priori to 1653 Tg C for burned biomass, and from 343 to 272 Tg for isoprene fluxes. Those estimates are acknowledged to depend on the accuracy of the formaldehyde data, as well as on the assumed fire emission factors and the oxidation mechanisms leading to HCHO production. Strongly decreased top–down fire fluxes (30–50%) are inferred in the peak fire season in Africa and during years with strong a priori fluxes associated with forest fires in Amazonia (in 2005, 2007, and 2010), bushfires in Australia (in 2006 and 2011), and peat burning in Indonesia (in 2006 and 2009), whereas

  15. Consumption-based Total Suspended Particulate Matter Emissions in Jing-Jin-Ji Area of China

    Science.gov (United States)

    Yang, S.; Chen, S.; Chen, B.

    2014-12-01

    The highly industrialized regions of China have been facing a serious haze problem, consisting mainly of total suspended particulate matter (TSPM), which has attracted great attention from the public since it directly impairs human health and clinically increases the risks of various respiratory and pulmonary diseases. In this paper, we set up a multi-regional input-output (MRIO) model to analyze the transfer routes of TSPM emissions between regions through trade. TSPM emissions from particulate source regions and sectors are identified by analyzing the embodied TSPM flows through monetary flows and the carbon footprint. The track of TSPM from origin to end via consumption activities is also revealed by tracing the product supply chains associated with the TSPM emissions. Beijing-Tianjin-Hebei (Jing-Jin-Ji), as the most industrialized area of China, is selected for a case study. The results show that over 70% of TSPM emissions associated with goods consumed in Beijing and Tianjin occurred outside of their own administrative boundaries, implying that Beijing and Tianjin are net embodied TSPM importers. Meanwhile, 63% of the total TSPM emissions in Hebei Province result from outside demand, indicating that Hebei is a net exporter. In addition, nearly half of TSPM emissions in the Jing-Jin-Ji area are by-products related to electricity and heating supply and non-metal mineral products. Based on the model results, we provide new insights into establishing systemic strategies and identifying mitigation priorities to stem TSPM emissions in China. Keywords: total suspended particulate matter (TSPM); urban ecosystem modeling; multi-regional input-output (MRIO); China
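
    The consumption-based accounting described above rests on standard input-output algebra; the following single-table sketch (a full MRIO stacks such blocks by region) uses an illustrative technical-coefficient matrix, emission intensities and final demand, not data from the study.

```python
import numpy as np

# A: inter-sector technical-coefficient matrix; f: direct emission intensity
# (emissions per unit output); y: final-demand vector. All numbers are placeholders.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
f = np.array([2.0, 0.5, 1.2])        # direct TSPM emissions per unit of sector output
y = np.array([100.0, 250.0, 80.0])   # final demand of the consuming region

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
x = L @ y                            # total output required to satisfy the final demand
embodied = f * x                     # emissions embodied in that demand, by producing sector
print(embodied, embodied.sum())
```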

  16. Analysis on influence factors of China's CO2 emissions based on Path-STIRPAT model

    International Nuclear Information System (INIS)

    Li Huanan; Mu Hailin; Zhang Ming; Li Nan

    2011-01-01

    With the intensification of global warming and continued growth in energy consumption, China is facing increasing pressure to cut its CO2 (carbon dioxide) emissions. This paper discusses the driving forces influencing China's CO2 emissions based on the Path-STIRPAT model, a method combining path analysis with the STIRPAT (stochastic impacts by regression on population, affluence and technology) model. The analysis shows that GDP per capita (A), industrial structure (IS), population (P), urbanization level (R) and technology level (T) are the main factors influencing China's CO2 emissions, and that they exert their influence interactively and collaboratively. The ranking of the factors' direct influence on China's CO2 emissions is A>T>P>R>IS, while that of their total influence is A>R>P>T>IS. A one percent increase in A, IS, P, R and T leads to a total change of 0.44, 1.58, 1.31, 1.12 and -1.09 percent in CO2 emissions, respectively, with direct contributions of 0.45, 0.07, 0.63, 0.08 and 0.92. Improving T is the most important way to reduce CO2 emissions in China. - Highlights: → We analyze the driving forces influencing China's CO2 emissions. → Five macro factors, such as per capita GDP, are the main influencing factors. → These factors exert an influence interactively and collaboratively. → Different factors' direct and total influences on China's CO2 emissions differ. → Improving the technology level is the most important way to reduce CO2 emissions in China.
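
    A STIRPAT-style model is typically estimated as a log-log regression, ln I = a + b ln P + c ln A + d ln T; a minimal sketch with synthetic placeholder data (not the paper's series, and without the path-analysis step) follows.

```python
import numpy as np

# Synthetic placeholder data for a STIRPAT-style regression of ln(emissions) on
# ln(population), ln(affluence) and ln(technology proxy).
rng = np.random.default_rng(1)
n = 30
P = rng.uniform(1.20, 1.35, n)       # population (illustrative units)
A_ = rng.uniform(2.0, 8.0, n)        # affluence, e.g. GDP per capita
T = rng.uniform(1.0, 3.0, n)         # technology proxy, e.g. energy intensity
I = 0.5 * P**1.3 * A_**0.4 * T**0.9 * np.exp(0.02 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), np.log(P), np.log(A_), np.log(T)])
coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
print("intercept and elasticities (P, A, T):", coef)
```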

  17. Efficiency and abatement costs of energy-related CO2 emissions in China: A slacks-based efficiency measure

    International Nuclear Information System (INIS)

    Choi, Yongrok; Zhang, Ning; Zhou, P.

    2012-01-01

    Highlights: ► We employ a slacks-based DEA model to estimate the energy efficiency and shadow prices of CO2 emissions in China. ► The empirical study shows that China was not performing CO2-efficiently. ► The average of the estimated shadow prices of CO2 emissions is about $7.2. -- Abstract: This paper uses a nonparametric efficiency analysis technique to estimate the energy efficiency, potential emission reductions and marginal abatement costs of energy-related CO2 emissions in China. We employ a non-radial slacks-based data envelopment analysis (DEA) model for estimating the potential reductions and efficiency of CO2 emissions for China. The dual model of the slacks-based DEA model is then used to estimate the marginal abatement costs of CO2 emissions. An empirical study based on China’s panel data (2001–2010) is carried out and some policy implications are also discussed.
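
    As a rough illustration of DEA-based efficiency measurement, the sketch below solves the simpler input-oriented radial (CCR) envelopment model with scipy, not the non-radial slacks-based model used in the paper; the input/output data are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (columns of X and Y are DMUs).

    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Returns theta in (0, 1].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))     # minimise theta; variables [theta, lambdas]
    A_in = np.hstack([-X[:, [o]], X])            # sum_j lambda_j x_ij - theta*x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -sum_j lambda_j y_rj <= -y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]

# Placeholder data: inputs = (energy use, labour), output = (GDP); four hypothetical units.
X = np.array([[10.0, 12.0, 8.0, 15.0],
              [ 5.0,  6.0, 4.0,  9.0]])
Y = np.array([[20.0, 22.0, 18.0, 25.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```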

  18. Nitrogen gas emissions and nitrate leaching dynamics under different tillage practices based on data synthesis and process-based modeling

    Science.gov (United States)

    Huang, Y.; Ren, W.; Tao, B.; Zhu, X.

    2017-12-01

    Nitrogen losses from agroecosystems are of great concern for global change because of their effects on global warming and water pollution, in the form of nitrogen gas emissions (e.g., N2O) and mineral nitrogen leaching (e.g., NO3-), respectively. Conservation tillage, particularly no-tillage (NT), may enhance soil carbon sequestration, soil aggregation and moisture; therefore it has the potential to promote N2O emissions and reduce NO3- leaching compared with conventional tillage (CT). However, the associated processes are significantly affected by various factors, such as soil properties, climate, and crop types. How tillage management practices affect nitrogen transformations and fluxes is still far from clear, with inconsistent or even opposite results from previous studies. To fill this knowledge gap, we quantitatively investigated gaseous and leaching nitrogen losses from NT and CT agroecosystems based on data synthesis and an improved process-based agroecosystem model. Our preliminary results suggest that NT management is more efficient in reducing NO3- leaching, while it simultaneously increases N2O emissions by approximately 10% compared with CT. The effects of NT on N2O emissions and NO3- leaching are highly influenced by the placement of nitrogen fertilizer and are more pronounced under humid climate conditions. Crop type is a less dominant factor in determining N2O and NO3- losses. Both our data synthesis and process-based modeling suggest that the enhanced carbon sequestration capacity from NT could be largely offset by the NT-induced increases in N2O emissions. This study provides a comprehensive quantitative assessment of the effects of NT on nitrogen emissions and leaching in agroecosystems. It provides scientific information for identifying proper management practices to ensure food security and minimize adverse environmental impacts. The results also underscore the importance of suitable nitrogen management in the NT

  19. Atmospheric radiation environment analyses based-on CCD camera at various mountain altitudes and underground sites

    Directory of Open Access Journals (Sweden)

    Li Cavoli Pierre

    2016-01-01

    Full Text Available The purpose of this paper is to discriminate secondary atmospheric particles and identify muons by measuring the natural radiative environment in atmospheric and underground locations. A CCD camera has been used as a cosmic ray sensor. The Low Noise Underground Laboratory of Rustrel (LSBB), France, gives access to a unique low-noise scientific environment deep enough to ensure screening from the neutron and proton radiative components. Analyses of the charge levels induced by radiation events in the pixels of the CCD camera, and maps of the charge events versus the hit pixel, are presented.

  20. Carbon-14 based determination of the biogenic fraction of industrial CO(2) emissions - application and validation.

    Science.gov (United States)

    Palstra, S W L; Meijer, H A J

    2010-05-01

    The (14)C method is a very reliable and sensitive method for industrial plants, emission authorities and emission inventories to verify estimates of the biogenic fraction of CO(2) emissions. The applicability of the method is shown for flue gas CO(2) samples that have been sampled at 1-h intervals at a coal- and wood-fired power plant and a waste incineration plant. Biogenic flue gas CO(2) fractions of 5-10% and 48-50% were measured at the power plant and the waste incineration plant, respectively. The reliability of the method was demonstrated by comparing the power plant results with those based on carbon mass input and output data of the power plant. At industrial plants with a relatively low biogenic CO(2) fraction (<10%), the results need to be corrected for sampled (14)CO(2) from atmospheric air. Copyright 2009 Elsevier Ltd. All rights reserved.
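
    The core calculation is a simple ratio of the measured 14C content to that of a purely biogenic reference, since fossil CO2 contains no 14C; a minimal sketch with an assumed reference value follows.

```python
def biogenic_co2_fraction(pmc_sample, pmc_biogenic_reference=110.0):
    """Biogenic fraction of flue-gas CO2 from its 14C content.

    pmc_sample -- measured 14C content of the flue-gas CO2, in percent modern carbon (pMC).
    pmc_biogenic_reference -- assumed 14C content of purely biogenic CO2 (pMC); the
        appropriate value depends on the age mix of the biomass burned and is an
        illustrative placeholder here. Fossil CO2 contains no 14C (0 pMC).
    """
    return pmc_sample / pmc_biogenic_reference

# Example: a flue-gas sample at 55 pMC would be ~50% biogenic under this reference value.
print(biogenic_co2_fraction(55.0))
```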

  1. Process control of high rate microcrystalline silicon based solar cell deposition by optical emission spectroscopy

    International Nuclear Information System (INIS)

    Kilper, T.; Donker, M.N. van den; Carius, R.; Rech, B.; Braeuer, G.; Repmann, T.

    2008-01-01

    Silicon thin-film solar cells based on microcrystalline silicon (μc-Si:H) were prepared in a 30 × 30 cm2 plasma-enhanced chemical vapor deposition reactor using 13.56 or 40.68 MHz plasma excitation frequency. Plasma emission was recorded by optical emission spectroscopy during μc-Si:H absorber layer deposition at deposition rates between 0.5 and 2.5 nm/s. The time course of SiH* and Hβ emission indicated strong drifts in the process conditions, particularly at low total gas flows. By actively controlling the SiH4 gas flow, the observed process drifts were successfully suppressed, resulting in a more homogeneous i-layer crystallinity along the growth direction. In a deposition regime with efficient usage of the process gas, the μc-Si:H solar cell efficiency was enhanced from 7.9% up to 8.8% by applying process control
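
    The record does not specify the control law used; as an illustration only, a simple proportional adjustment of the SiH4 flow from an OES intensity ratio might look like the following sketch, in which the gain, setpoint and flow limits are hypothetical.

```python
def update_silane_flow(flow_sccm, ratio_measured, ratio_setpoint, gain=5.0,
                       flow_min=10.0, flow_max=100.0):
    """One step of a simple proportional controller on the SiH4 flow.

    ratio_measured / ratio_setpoint are SiH*/H-beta emission-intensity ratios from OES.
    If the ratio drifts below the setpoint (silane depletion), the flow is increased.
    Gain, limits and setpoint are illustrative placeholders.
    """
    error = ratio_setpoint - ratio_measured
    new_flow = flow_sccm + gain * error
    return min(max(new_flow, flow_min), flow_max)

flow = 40.0
for ratio in (0.95, 0.90, 0.88, 0.92):       # drifting OES ratio readings
    flow = update_silane_flow(flow, ratio, ratio_setpoint=1.0)
    print(round(flow, 2))
```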

  2. Perturbations of ionosphere-magnetosphere coupling by powerful VLF emissions from ground-based transmitters

    International Nuclear Information System (INIS)

    Belov, A. S.; Markov, G. A.; Ryabov, A. O.; Parrot, M.

    2012-01-01

    The characteristics of the plasma-wave disturbances stimulated in the near-Earth plasma by powerful VLF radiation from ground-based transmitters are investigated. Radio communication VLF transmitters of about 1 MW in power are shown to produce artificial plasma-wave channels (density ducts) in near-Earth space that originate in the lower ionosphere above the disturbing emission source and extend through the entire ionosphere and magnetosphere of the Earth along the magnetic field lines. Measurements with the onboard equipment of the DEMETER satellite have revealed that, under the action of emission from the NWC transmitter, which is one of the most powerful VLF radio transmitters, the generation of quasi-electrostatic (plasma) waves is observed over most of the satellite trajectory along the disturbed magnetic flux tube. This is probably indicative of stimulated emission of a magnetospheric maser.

  3. Coupling of CORINAIR Data to Cost-effective Emission Reduction Strategies Based on Critical Thresholds

    International Nuclear Information System (INIS)

    Johansson, M.; Guardans, R.; Lindstrom, M.

    1999-12-01

    This report summarizes the results of a workshop held by the participants in the EU/LIFE project: Coupling of CORINAIR data to cost-effective emission reduction strategies based on critical thresholds. The project participants include FEI, Finland; NERI, Denmark; CIEMAT, Spain; Lund University, Sweden; EMEP/MSC-W; UN/ECE/WGE/CCE and IIASA. The main objective of the project is to support national activities in assessing the spatial and temporal details of emissions of sulphur, nitrogen oxides, ammonium and volatile organic compounds and the impacts of acidification, eutrophication and ground-level ozone. The project workshop enabled participants to report preliminary results of the two main tasks, emissions and impacts, and to agree on common solutions for the final results. (Author) 11 refs

  4. A process-based emission model of volatile organic compounds from silage sources on farms

    DEFF Research Database (Denmark)

    Bonifacio, H. F.; Rotz, C. A.; Hafner, S. D.

    2017-01-01

    Silage on dairy farms can emit large amounts of volatile organic compounds (VOCs), a precursor in the formation of tropospheric ozone. Because of the challenges associated with direct measurements, process-based modeling is another approach for estimating emissions of air pollutants from sources...... was evaluated using ethanol and methanol emissions measured from conventional silage piles (CSP), silage bags (SB), total mixed rations (TMR), and loose corn silage (LCS) at a commercial dairy farm in central California. With transport coefficients for ethanol refined using experimental data from our previous......% if feeds were delivered as four feedings per day rather than as one. Reducing the exposed face of storage can also be useful. Simulated use of silage bags resulted in 90% and 18% reductions in emissions from the storage face and whole farm, respectively....

  5. Dynamic Energy Consumption and Emission Modelling of Container Terminal based on Multi Agents

    Directory of Open Access Journals (Sweden)

    Hou Jue

    2017-01-01

    Full Text Available Environmental protection and energy-saving pressures are attracting increasing attention from container terminal operators. Complying with increasingly strict environmental regulations while reducing energy consumption and air pollutant emissions and optimizing operational efficiency is an urgent problem for container terminal operators in China. This paper is based on the characteristics of the Container Terminal Operation System (CTOS), which comprises several container handling processes: the berth allocation problem, the truck dispatching problem, the yard allocation problem and auxiliary processes. The dynamic energy consumption and emission characteristics of each piece of equipment and each process are modelled, and the paper presents a multi-agent CTOS architecture with an early-warning model based on multi-class support vector machines (SVM). A container terminal simulation is built on the JADE platform to support decision-making that reduces energy consumption and air pollutant emissions and allows terminal operators to be more flexible in meeting the Emission Control Area regulations and China's Green Port Plan.

  6. Tailoring the chirality of light emission with spherical Si-based antennas.

    Science.gov (United States)

    Zambrana-Puyalto, Xavier; Bonod, Nicolas

    2016-05-21

    Chirality of light is of fundamental importance in several enabling technologies with growing applications in life sciences, chemistry and photodetection. Recently, some attention has been focused on chiral quantum emitters. Consequently, optical antennas which are able to tailor the chirality of light emission are needed. Spherical nanoresonators such as colloids are of particular interest to design optical antennas since they can be synthesized at a large scale and they exhibit good optical properties. Here, we show that these colloids can be used to tailor the chirality of a chiral emitter. To this purpose, we derive an analytic formalism to model the interaction between a chiral emitter and a spherical resonator. We then compare the performances of metallic and dielectric spherical antennas to tailor the chirality of light emission. It is seen that, due to their strong electric dipolar response, metallic spherical nanoparticles spoil the chirality of light emission by yielding achiral fields. In contrast, thanks to the combined excitation of electric and magnetic modes, dielectric Si-based particles feature the ability to inhibit or to boost the chirality of light emission. Finally, it is shown that dual modes in dielectric antennas preserve the chirality of light emission.

  7. Real-time monitoring of emissions from monoethanolamine-based industrial scale carbon capture facilities.

    Science.gov (United States)

    Zhu, Liang; Schade, Gunnar Wolfgang; Nielsen, Claus Jørgen

    2013-12-17

    We demonstrate the capabilities and properties of Proton Transfer Reaction time-of-flight mass spectrometry (PTR-ToF-MS) for real-time monitoring of gaseous emissions from industrial scale amine-based carbon capture processes. The benchmark monoethanolamine (MEA) was used as an example of amines needing to be monitored from carbon capture facilities, and to describe how the measurements may be influenced by potentially interfering species in CO2 absorber stack discharges. On the basis of known or expected emission compositions, we investigated the PTR-ToF-MS MEA response as a function of sample flow humidity and ammonia and CO2 abundances, and show that all can introduce interferences, thus making accurate amine measurements difficult. This warrants proper sample pretreatment, and we show an example using dilution with bottled zero air of 1:20 to 1:10 to monitor stack gas concentrations at the CO2 Technology Center Mongstad (TCM), Norway. Observed emissions included many expected chemical species, dominantly ammonia and acetaldehyde, but also two new species previously not reported but emitted in significant quantities. With respect to concerns regarding amine emissions, we show that accurate amine quantification in the presence of water vapor, ammonia, and CO2 becomes feasible after proper sample dilution, thus making PTR-ToF-MS a viable technique to monitor future carbon capture facility emissions without conventional laborious sample pretreatment.

  8. Effective pollutant emission heights for atmospheric transport modelling based on real-world information

    International Nuclear Information System (INIS)

    Pregger, Thomas; Friedrich, Rainer

    2009-01-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available and simple assumptions are often used in atmospheric models. As a contribution to improve knowledge on emission heights this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of the probably most comprehensive database of real-world stack information existing in Europe based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling. - The comprehensive analysis of real-world stack data provides detailed default parameter values for improving vertical emission distribution in atmospheric modelling

  9. Irrigation water quality in southern Mexico City based on bacterial and heavy metal analyses

    Energy Technology Data Exchange (ETDEWEB)

    Solis, C. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico, Apdo Postal 20-364, 01000 Mexico, DF (Mexico)]. E-mail: corina@fisica.unam.mx; Sandoval, J. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Apdo Postal 70-275, 04510 Mexico, DF (Mexico); Perez-Vega, H. [Ciencias Agropecuarias, Universidad Juarez Autonoma de Tabasco, Ave. Universidad S/N. Zona de la Cultura, 86040 Villa Hermosa, Tabasco (Mexico); Mazari-Hiriart, M. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Apdo Postal 70-275, 04510 Mexico, DF (Mexico)

    2006-08-15

    Xochimilco is located in southern Mexico City and preserves a remnant of the pre-Columbian farming system, the 'chinampa' agriculture. 'Chinampas' are island plots surrounded by a canal network. At present the area is densely urbanized and populated, with various contaminant sources contributing to the water quality degradation. The canal system is recharged by a combination of treated and untreated wastewater, and precipitation during the rainy season. Over 40 agricultural species, including vegetables, cereals and flowers, are produced in the 'chinampas'. In order to characterize the quality of Xochimilco's water used for irrigation, spatial and temporal contaminant indicators such as microorganisms and heavy metals were investigated. Bacterial indicators (fecal coliforms, fecal enterococci) were analyzed by standard analytical procedures, and heavy metals (such as Fe, Cu, Zn and Pb) were analyzed by particle induced X-ray emission (PIXE). The more contaminated sites coincide with the heavily populated areas. Seasonal variation of contaminants was observed, with the higher bacterial counts and heavy metal concentrations reported during the rainy season.

  10. Irrigation water quality in southern Mexico City based on bacterial and heavy metal analyses

    International Nuclear Information System (INIS)

    Solis, C.; Sandoval, J.; Perez-Vega, H.; Mazari-Hiriart, M.

    2006-01-01

    Xochimilco is located in southern Mexico City and preserves a remnant of the pre-Columbian farming system, the 'chinampa' agriculture. 'Chinampas' are island plots surrounded by a canal network. At present the area is densely urbanized and populated, with various contaminant sources contributing to the water quality degradation. The canal system is recharged by a combination of treated and untreated wastewater, and precipitation during the rainy season. Over 40 agricultural species, including vegetables, cereals and flowers, are produced in the 'chinampas'. In order to characterize the quality of Xochimilco's water used for irrigation, spatial and temporal contaminant indicators such as microorganisms and heavy metals were investigated. Bacterial indicators (fecal coliforms, fecal enterococci) were analyzed by standard analytical procedures, and heavy metals (such as Fe, Cu, Zn and Pb) were analyzed by particle induced X-ray emission (PIXE). The more contaminated sites coincide with the heavily populated areas. Seasonal variation of contaminants was observed, with the higher bacterial counts and heavy metal concentrations reported during the rainy season.

  11. Irrigation water quality in southern Mexico City based on bacterial and heavy metal analyses

    Science.gov (United States)

    Solís, C.; Sandoval, J.; Pérez-Vega, H.; Mazari-Hiriart, M.

    2006-08-01

    Xochimilco is located in southern Mexico City and preserves a remnant of the pre-Columbian farming system, the "chinampa" agriculture. "Chinampas" are island plots surrounded by a canal network. At present the area is densely urbanized and populated, with various contaminant sources contributing to the water quality degradation. The canal system is recharged by a combination of treated and untreated wastewater, and precipitation during the rainy season. Over 40 agricultural species, including vegetables, cereals and flowers, are produced in the "chinampas". In order to characterize the quality of Xochimilco's water used for irrigation, spatial and temporal contaminant indicators such as microorganisms and heavy metals were investigated. Bacterial indicators (fecal coliforms, fecal enterococci) were analyzed by standard analytical procedures, and heavy metals (such as Fe, Cu, Zn and Pb) were analyzed by particle induced X-ray emission (PIXE). The more contaminated sites coincide with the heavily populated areas. Seasonal variation of contaminants was observed, with the higher bacterial counts and heavy metal concentrations reported during the rainy season.

  12. Simple Crosscutting Concerns Are Not So Simple : Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  13. Systematics of Plant-Pathogenic and Related Streptomyces Species Based on Phylogenetic Analyses of Multiple Gene Loci

    Science.gov (United States)

    The 10 species of Streptomyces implicated as the etiological agents in scab disease of potatoes or soft rot disease of sweet potatoes are distributed among 7 different phylogenetic clades in analyses based on 16S rRNA gene sequences, but high sequence similarity of this gene among Streptomyces speci...

  14. Identification among morphologically similar Argyreia (Convolvulaceae) based on leaf anatomy and phenetic analyses.

    Science.gov (United States)

    Traiperm, Paweena; Chow, Janene; Nopun, Possathorn; Staples, G; Swangpol, Sasivimon C

    2017-12-01

    The genus Argyreia Lour. is one of the species-rich Asian genera in the family Convolvulaceae. Several species complexes were recognized in which taxon delimitation was imprecise, especially when examining herbarium materials without fully developed open flowers. The main goal of this study is to investigate and describe leaf anatomy for some morphologically similar Argyreia using epidermal peeling, leaf and petiole transverse sections, and scanning electron microscopy. Phenetic analyses including cluster analysis and principal component analysis were used to investigate the similarity of these morpho-types. Anatomical differences observed between the morpho-types include epidermal cell walls and the trichome types on the leaf epidermis. Additional differences in the leaf and petiole transverse sections include the epidermal cell shape of the adaxial leaf blade, the leaf margins, and the petiole transverse sectional outline. The phenogram from cluster analysis using the UPGMA method represented four groups with an R value of 0.87. Moreover, the important quantitative and qualitative leaf anatomical traits of the four groups were confirmed by the principal component analysis of the first two components. The results from phenetic analyses confirmed the anatomical differentiation between the morpho-types. Leaf anatomical features regarded as particularly informative for morpho-type differentiation can be used to supplement macro morphological identification.

  15. Australian coal mine methane emissions mitigation potential using a Stirling engine-based CHP system

    International Nuclear Information System (INIS)

    Meybodi, Mehdi Aghaei; Behnia, Masud

    2013-01-01

    Methane, a major contributor to global warming, is a greenhouse gas emitted from coal mines. The abundance of coal mines, and consequently the considerable amount of methane emitted, requires drastic measures to mitigate the harmful effects of coal mining on the environment. One of the commonly adopted methods is to use the emitted methane to fuel power generation systems; however, the instability of the fuel source hinders the development of systems using conventional prime movers. To address this, the application of Stirling engines may be considered. Here, we develop a techno-economic methodology for conducting an optimisation-based feasibility study on the application of Stirling engines as the prime movers of coal mine CHP systems from an economic and an environmental point of view. To examine the impact of environmental policies on the economics of the system, the two commonly implemented ones (i.e. a carbon tax and an emissions trading scheme) are considered. The methodology was applied to a local coal mine. The results indicate that incorporating the modelled system leads not only to a substantial reduction in greenhouse gas emissions, but also to improved economics. Further, due to the heavy economic burden it imposes, the carbon tax scheme creates a strong incentive for the coal mining industry to address methane emissions. -- Highlights: • We study the application of Stirling engines in coal mine CHP systems. • We develop a thermo-economic approach based on net present worth analysis. • We examine the impact of a carbon tax and an ETS on the economics of the system. • The modelled system leads to a substantial reduction in greenhouse gas emissions. • A carbon tax provides a greater incentive to address methane emissions
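
    A minimal sketch of a net-present-worth comparison with and without a carbon price, in the spirit of the techno-economic approach described above; all cost, revenue and price figures are illustrative placeholders, not values from the study.

```python
def npv(cash_flows, rate):
    """Net present worth of a list of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative mine-methane CHP project (all figures are placeholders).
capital_cost = 2.0e6                 # year-0 investment
annual_energy_revenue = 0.35e6       # electricity and heat sales per year
abated_t_co2e = 20_000               # CH4 destruction credited per year, t CO2-e
carbon_price = 23.0                  # $/t CO2-e under a hypothetical tax or trading scheme
years = 15
discount_rate = 0.08

base = [-capital_cost] + [annual_energy_revenue] * years
with_carbon = [-capital_cost] + [annual_energy_revenue + abated_t_co2e * carbon_price] * years
print(round(npv(base, discount_rate)), round(npv(with_carbon, discount_rate)))
```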

  16. Distributions of emissions intensity for individual beef cattle reared on pasture-based production systems.

    Science.gov (United States)

    McAuliffe, G A; Takahashi, T; Orr, R J; Harris, P; Lee, M R F

    2018-01-10

    Life Cycle Assessment (LCA) of livestock production systems is often based on inventory data for farms typical of a study region. As information on individual animals is often unavailable, livestock data may already be aggregated at the time of inventory analysis, both across individual animals and across seasons. Even though various computational tools exist to consider the effect of genetic and seasonal variabilities in livestock-originated emissions intensity, the degree to which these methods can address the bias suffered by representative animal approaches is not well understood. Using detailed on-farm data collected on the North Wyke Farm Platform (NWFP) in Devon, UK, this paper proposes a novel approach of life cycle impact assessment that complements the existing LCA methodology. Field data, such as forage quality and animal performance, were measured at high spatial and temporal resolutions and directly transferred into LCA processes. This approach has enabled derivation of emissions intensity for each individual animal and, by extension, its intra-farm distribution, providing a step towards reducing uncertainty related to agricultural production inherent in LCA studies for food. Depending on pasture management strategies, the total emissions intensity estimated by the proposed method was higher than the equivalent value recalculated using a representative animal approach by 0.9-1.7 kg CO2-eq/kg liveweight gain, or up to 10% of system-wide emissions. This finding suggests that emissions intensity values derived by the latter technique may be underestimated due to insufficient consideration given to poorly performing animals, whose emissions become exponentially greater as average daily gain decreases. Strategies to mitigate life-cycle environmental impacts of pasture-based beef production systems are also discussed.
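
    A minimal sketch of how per-animal emissions intensity can be derived from feed intake in an IPCC Tier 2 style; the methane conversion factor, intake, GWP value and liveweight gains below are illustrative assumptions, not the NWFP data.

```python
def enteric_ch4_kg(gross_energy_mj_per_day, days, ym_percent=6.5):
    """IPCC Tier 2-style enteric methane (kg) from feed gross-energy intake.

    55.65 MJ/kg is the energy content of methane used by the IPCC; the methane
    conversion factor Ym and the intake used below are illustrative assumptions.
    """
    return gross_energy_mj_per_day * (ym_percent / 100.0) * days / 55.65

def emissions_intensity(ch4_kg, liveweight_gain_kg, gwp_ch4=28.0):
    """Emissions intensity in kg CO2-eq per kg liveweight gain (GWP value assumed)."""
    return ch4_kg * gwp_ch4 / liveweight_gain_kg

# Two illustrative animals over a 200-day grazing period: a well-performing and a
# poorly performing one; the poor performer's intensity rises sharply as gain falls.
for gain_kg in (220.0, 110.0):
    ch4 = enteric_ch4_kg(gross_energy_mj_per_day=180.0, days=200)
    print(round(emissions_intensity(ch4, gain_kg), 1), "kg CO2-eq / kg gain")
```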

  17. Emissions to the Atmosphere from Amine-Based Post Combustion CO2 Capture Plant - Regulatory Aspects

    International Nuclear Information System (INIS)

    Azzi, Merched; Angove, Dennys; Dave, Narendra; Day, Stuart; Do, Thong; Feron, Paul; Sharma, Sunil; Attalla, Moetaz; Abu Zahra, Mohammad

    2014-01-01

    Amine-based Post Combustion Capture (PCC) of CO2 is a readily available technology that can be deployed to reduce CO2 emissions from coal-fired power plants. However, PCC plants will likely release small quantities of amine and amine degradation products to the atmosphere along with the treated flue gas. The possible environmental effects of these emissions have been examined through different studies carried out around the world. Based on flue gas from a 400 MW ultra-supercritical coal-fired power plant, Aspen-Plus PCC process simulations were used to predict the potential atmospheric emissions from the plant. Different research initiatives carried out in this area have produced new knowledge that has significantly reduced the risk perception for the release of amine and amine degradation products to the atmosphere. In addition to reducing CO2 emissions, the PCC technology will also help in reducing SOx and NO2 emissions. However, some other pollutants such as NH3 and aerosols will increase if appropriate control technologies are not adopted. To study the atmospheric photo-oxidation of amines, attempts are being made to develop chemical reaction schemes that can be used for air quality assessment. However, more research is still required in this area to estimate the reactivity of amine solvents in the presence of other pollutants such as NOx and other volatile organic compounds in the background air. Current air quality guidelines may need to be updated to include limits for additional pollutants such as NH3, nitrosamines and nitramines once more information related to their emissions is available. This paper focuses on describing the predicted concentrations of major pollutants that are expected to be released from a coal-fired power plant, obtained by ASPEN-Plus PCC process simulations, in terms of current air quality regulations and other regulatory aspects. (authors)

  18. X-ray-based attenuation correction for positron emission tomography/computed tomography scanners.

    Science.gov (United States)

    Kinahan, Paul E; Hasegawa, Bruce H; Beyer, Thomas

    2003-07-01

    A synergy of positron emission tomography (PET)/computed tomography (CT) scanners is the use of the CT data for x-ray-based attenuation correction of the PET emission data. Current methods of measuring transmission use positron sources, gamma-ray sources, or x-ray sources. Each of the types of transmission scans involves different trade-offs of noise versus bias, with positron transmission scans having the highest noise but lowest bias, whereas x-ray scans have negligible noise but the potential for increased quantitative errors. The use of x-ray-based attenuation correction, however, has other advantages, including a lack of bias introduced from post-injection transmission scanning, which is an important practical consideration for clinical scanners, as well as reduced scan times. The sensitivity of x-ray-based attenuation correction to artifacts and quantitative errors depends on the method of translating the CT image from the effective x-ray energy of approximately 70 keV to attenuation coefficients at the PET energy of 511 keV. These translation methods are usually based on segmentation and/or scaling techniques. Errors in the PET emission image arise from positional mismatches caused by patient motion or respiration differences between the PET and CT scans; incorrect calculation of attenuation coefficients for CT contrast agents or metallic implants; or keeping the patient's arms in the field of view, which leads to truncation and/or beam-hardening (or x-ray scatter) artifacts. Proper interpretation of PET emission images corrected for attenuation by using the CT image relies on an understanding of the potential artifacts. In cases where an artifact or bias is suspected, careful inspection of all three available images (CT and PET emission with and without attenuation correction) is recommended. Copyright 2003 Elsevier Inc. All rights reserved.
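
    A common way to translate CT numbers to 511 keV attenuation coefficients is a bilinear (segmentation/scaling) mapping; the sketch below is illustrative only, with the bone-branch slope an assumed value rather than one taken from the paper.

```python
import numpy as np

def mu_511_from_hu(hu):
    """Bilinear conversion of CT numbers (HU) to linear attenuation at 511 keV (1/cm).

    Below 0 HU the voxel is treated as an air-water mixture; above 0 HU as a
    water-bone mixture with a shallower slope, since bone attenuates proportionally
    less at 511 keV than at CT energies. mu_water ~ 0.096 /cm at 511 keV; the
    bone-branch slope here is an illustrative assumption.
    """
    hu = np.asarray(hu, dtype=float)
    mu_water = 0.096
    low = mu_water * (1.0 + hu / 1000.0)         # air-water mixture branch
    high = mu_water + hu * 5.0e-5                # water-bone branch (assumed slope)
    return np.clip(np.where(hu <= 0.0, low, high), 0.0, None)

print(mu_511_from_hu([-1000, 0, 1000]))  # air, water, dense bone (approximate)
```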

  19. Activity Based Learning in a Freshman Global Business Course: Analyses of Preferences and Demographic Differences

    Science.gov (United States)

    Levine, Mark F.; Guy, Paul W.

    2007-01-01

    The present study investigates pre-business students' reaction to Activity Based Learning in a lower division core required course entitled Introduction to Global Business in the business curriculum at California State University Chico. The study investigates students' preference for Activity Based Learning in comparison to a more traditional…

  20. Variability Abstractions: Trading Precision for Speed in Family-Based Analyses

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Brabrand, Claus; Wasowski, Andrzej

    2015-01-01

    Family-based (lifted) data-flow analysis for Software Product Lines (SPLs) is capable of analyzing all valid products (variants) without generating any of them explicitly. It takes as input only the common code base, which encodes all variants of a SPL, and produces analysis results corresponding...

  1. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  2. Constraining atmospheric ammonia emissions through new observations with an open-path, laser-based sensor

    Science.gov (United States)

    Sun, Kang

    As the third most abundant nitrogen species in the atmosphere, ammonia (NH3) is a key component of the global nitrogen cycle. Since the industrial revolution, humans have more than doubled the emissions of NH3 to the atmosphere by industrial nitrogen fixation, revolutionizing agricultural practices, and burning fossil fuels. NH3 is a major precursor to fine particulate matter (PM2.5), which has adverse impacts on air quality and human health. The direct and indirect aerosol radiative forcings currently constitute the largest uncertainties for future climate change predictions. Gas and particle phase NH3 eventually deposits back to the Earth's surface as reactive nitrogen, leading to the exceedance of ecosystem critical loads and perturbation of ecosystem productivity. Large uncertainties still remain in estimating the magnitude and spatiotemporal patterns of NH3 emissions from all sources and over a range of scales. These uncertainties in emissions also propagate to the deposition of reactive nitrogen. To improve our understanding of NH3 emissions, observational constraints are needed from local to global scales. The first part of this thesis is to provide quality-controlled, reliable NH3 measurements in the field using an open-path, quantum cascade laser-based NH3 sensor. As the second and third part of my research, NH3 emissions were quantified from a cattle feedlot using eddy covariance (EC) flux measurements, and the similarities between NH3 turbulent fluxes and those of other scalars (temperature, water vapor, and CO2) were investigated. The fourth part involves applying a mobile laboratory equipped with the open-path NH3 sensor and other important chemical/meteorological measurements to quantify fleet-integrated NH3 emissions from on-road vehicles. In the fifth part, the on-road measurements were extended to multiple major urban areas in both the US and China in the context of five observation campaigns. The results significantly improved current urban NH3

  3. CO2 emissions embodied in China-US trade: Input-output analysis based on the emergy/dollar ratio

    International Nuclear Information System (INIS)

    Du Huibin; Guo Jianghong; Mao Guozhu; Smith, Alexander M.; Wang Xuxu; Wang, Yuan

    2011-01-01

    To gain insight into changes in CO2 emissions embodied in China-US trade, an input-output analysis based on the emergy/dollar ratio (EDR) is used to estimate embodied CO2 emissions; a structural decomposition analysis (SDA) is employed to analyze the driving factors for changes in CO2 emissions embodied in China's exports to the US during 2002-2007. The results of the input-output analysis show that net export of CO2 emissions increased quickly from 2002 to 2005 but decreased from 2005 to 2007. These trends are due to a reduction in total CO2 emission intensity, a decrease in the exchange rate, and small imports of embodied CO2 emissions. The results of the SDA demonstrate that total export volume was the largest driving factor for the increase in embodied CO2 emissions during 2002-2007, followed by intermediate input structure. Direct CO2 emissions intensity had a negative effect on changes in embodied CO2 emissions. The results suggest that China should establish a framework for allocating emission responsibilities, enhance energy efficiency, and improve intermediate input structure. - Highlights: → An input-output analysis based on the emergy/dollar ratio estimated embodied CO2. → A structural decomposition analysis analyzed the driving factors. → Net export of CO2 increased from 2002 to 2005 but decreased from 2005 to 2007. → Total export volume was the largest driving factor. → A framework for allocating emission responsibilities.
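
    The environmentally extended input-output step that underlies such embodied-emission estimates can be sketched as follows. The three-sector coefficient matrix, intensities and export vector are purely illustrative and the sketch does not reproduce the paper's emergy/dollar-ratio adjustment:

        import numpy as np

        # Illustrative 3-sector technical coefficient matrix A (inputs per unit output)
        A = np.array([[0.10, 0.20, 0.05],
                      [0.15, 0.10, 0.10],
                      [0.05, 0.05, 0.20]])
        f = np.array([1.2, 0.6, 0.3])             # direct CO2 intensity per unit output
        y_export = np.array([50.0, 80.0, 20.0])   # final demand vector (exports)

        L = np.linalg.inv(np.eye(3) - A)          # Leontief inverse (I - A)^-1
        embodied = f @ L @ y_export               # CO2 embodied in the export bundle
        print("Embodied CO2 in exports:", embodied)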

  4. Analyses of integrated aircraft cabin contaminant monitoring network based on Kalman consensus filter.

    Science.gov (United States)

    Wang, Rui; Li, Yanxiao; Sun, Hui; Chen, Zengqiang

    2017-11-01

    Modern civil aircraft use pressurized, air-ventilated cabins with limited space. To monitor multiple contaminants and overcome the hypersensitivity of a single sensor, the paper constructs an output-correction integrated sensor configuration using sensors with different measurement principles, after comparing it with two other configurations. The proposed configuration works as a node in a distributed wireless sensor network for contaminant monitoring. Measurement error models for the integrated sensors are also proposed, and the Kalman consensus filter is used to estimate states and fuse data in order to regulate the single-sensor measurement results. The paper develops a sufficient proof of Kalman consensus filter stability in the presence of system and observation noise and compares the mean estimation and mean consensus errors of the Kalman consensus filter against a local Kalman filter. Numerical examples show the effectiveness of the algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
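
    A scalar-state sketch of a consensus-augmented Kalman update of the kind described above; the noise parameters, consensus rate and random-walk state model are illustrative assumptions and do not reproduce the paper's filter equations:

        def kcf_step(x, P, z, neighbors_x, q=0.01, r=0.5, eps=0.1):
            """One Kalman-consensus step for a scalar random-walk state.
            x, P        : node's current estimate and variance
            z           : node's own measurement
            neighbors_x : prior estimates received from neighbouring nodes"""
            # Predict (random-walk model)
            x_pred, P_pred = x, P + q
            # Local Kalman measurement update
            K = P_pred / (P_pred + r)
            x_upd = x_pred + K * (z - x_pred)
            # Consensus term pulls the estimate toward the neighbours' estimates
            x_new = x_upd + eps * sum(xj - x_pred for xj in neighbors_x)
            P_new = (1.0 - K) * P_pred
            return x_new, P_new

        x, P = 0.0, 1.0
        x, P = kcf_step(x, P, z=2.1, neighbors_x=[1.8, 2.3])
        print(x, P)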

  5. Toxicity testing and chemical analyses of recycled fibre-based paper for food contact

    DEFF Research Database (Denmark)

    Binderup, Mona-Lise; Pedersen, Gitte Alsing; Vinggaard, Anne

    2002-01-01

    of different qualities as food-contact materials and to perform a preliminary evaluation of their suitability from a safety point of view, and, second, to evaluate the use of different in vitro toxicity tests for screening of paper and board. Paper produced from three different categories of recycled fibres (B...... of the paper products were extracted with either 99% ethanol or water. Potential migrants in the extracts were identified and semiquantified by GC-1R-MS or GC-HRMS. In parallel to the chemical analyses, a battery of four different in vitro toxicity tests with different endpoints were applied to the same...... was less cytotoxic than the extracts prepared from paper made from recycled fibres, and extracts prepared from C was the most cytotoxic. None of the extracts showed mutagenic activity. No conclusion about the oestrogenic activity could be made, because all extracts were cytotoxic to the test organism (yeast...

  6. Aroma profile of Garnacha Tintorera-based sweet wines by chromatographic and sensorial analyses.

    Science.gov (United States)

    Noguerol-Pato, R; González-Álvarez, M; González-Barreiro, C; Cancho-Grande, B; Simal-Gándara, J

    2012-10-15

    The aroma profiles of three Garnacha Tintorera-based wines were studied: a base wine, a naturally sweet wine, and a mixture of naturally sweet wine with another sweet wine obtained by fortification with spirits. The aroma fingerprint was traced by GC-MS analysis of volatile compounds and by sensorial analysis of odours and tastes. Among the volatile compounds, sotolon (73 μg/L) and acetoin (122 μg/L) were the two main compounds found in naturally sweet wine. With regard to the odorant series, those most dominant for Garnacha Tintorera base wine were floral, fruity and spicy. In contrast, the odorant series most markedly affected by off-vine drying of the grapes were floral, caramelized and vegetal-wood. Finally, the odorant series affected by the switch-off of alcoholic fermentation with ethanol 96% (v/v) fit for human consumption followed by oak barrel aging were caramelized and vegetal-wood. A partial least squares analysis (PLS-2) was used to detect correlations between sets of sensory data (those obtained with mouth and nose) with the ultimate aim of improving our current understanding of the flavour of Garnacha Tintorera red wines, both base and sweet. Based on the sensory dataset analysis, the descriptors with the highest weight for separating base and sweet wines from Garnacha Tintorera were sweetness, dried fruit and caramel (for sweet wines) vs. bitterness, astringency and geranium (for base wines). Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.

    Science.gov (United States)

    Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa

    2015-09-01

    Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purpose of this study was to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, B and C consisted of the function-based and non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. © The Author(s) 2015.

  8. The influence of propylene glycol ethers on base diesel properties and emissions from a diesel engine

    International Nuclear Information System (INIS)

    Gómez-Cuenca, F.; Gómez-Marín, M.; Folgueras-Díaz, M.B.

    2013-01-01

    Highlights: • Effect of propylene glycol ethers on diesel fuel properties. • Effect of these compounds on diesel engine performance and emissions. • Blends with ⩽4 wt.% of oxygen do not substantially change diesel fuel quality. • Blends with ⩽2.5 wt.% of oxygen reduce CO, HC and NOx emissions, but not smoke. • These compounds help to achieve cleaner combustion in a diesel engine. - Abstract: The oxygenated additives propylene glycol methyl ether (PGME), propylene glycol ethyl ether (PGEE) and dipropylene glycol methyl ether (DPGME) were studied to determine their influence on both the base diesel fuel properties and the exhaust emissions from a diesel engine (CO, NOx, unburnt hydrocarbons and smoke). For diesel blends with low oxygen content (⩽4.0 wt.%), the addition of these compounds to base diesel fuel decreases aromatic content, kinematic viscosity, cold filter plugging point and Conradson carbon residue. Also, each compound modifies the distillation curve at temperatures below the corresponding oxygenated compound boiling point, the distillate percentage being increased. The blend cetane number depends on the type of propylene glycol ether added, its molecular weight, and the oxygen content of the fuel. The addition of PGME slightly decreased the diesel fuel cetane number, while PGEE and DPGME increased it. Base diesel fuel-propylene glycol ether blends with 1.0 and 2.5 wt.% oxygen contents were used in order to determine the performance of the diesel engine and its emissions at both full and medium loads and different engine speeds (1000, 2500 and 4000 rpm). In general, at full load and in comparison with base diesel fuel, the blends show a slight reduction of oxygen-free specific fuel consumption. CO emissions are reduced appreciably for 2.5 wt.% of oxygen blends, mainly for PGEE and DPGME. NOx emissions are reduced slightly, but smoke is not. Unburnt hydrocarbon emissions decrease at 1000 and 2500 rpm, but not at 4000 rpm. At medium load

  9. Development of a Carbon Emission Calculations System for Optimizing Building Plan Based on the LCA Framework

    Directory of Open Access Journals (Sweden)

    Feifei Fu

    2014-01-01

    Full Text Available Life cycle thinking has become widely applied in the assessment of building environmental performance. Various tools have been developed to support the application of the life cycle assessment (LCA) method. This paper focuses on the carbon emission during the building construction stage. A partial LCA framework is established to assess the carbon emission in this phase. Furthermore, five typical LCA tool programs have been compared and analyzed to demonstrate the current application of LCA tools and their limitations in the building construction stage. Based on the analysis of existing tools and sustainability demands in building, a new computer calculation system has been developed to calculate the carbon emission for optimizing the sustainability during the construction stage. The system structure and detailed functions are described in this paper. Finally, a case study is analyzed to demonstrate the designed LCA framework and system functions. This case is based on a typical building in the UK with different plans of masonry wall and timber frame to make a comparison. The final results disclose that a timber frame wall has less embodied carbon emission than a similar masonry structure. A 16% reduction was found in this study.
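
    The core of such a construction-stage calculation reduces to summing quantities multiplied by emission factors. The sketch below uses hypothetical material names and factors; they are not values from the paper or from any published inventory:

        # Hypothetical bill of quantities: (material, quantity in kg, kgCO2e per kg)
        bill_of_quantities = [
            ("concrete blocks", 18000.0, 0.10),
            ("mortar",           2500.0, 0.16),
            ("timber studs",     3200.0, 0.45),
        ]

        def embodied_carbon(items):
            """Construction-stage embodied carbon as sum(quantity * emission factor)."""
            return sum(qty * factor for _, qty, factor in items)

        print("Embodied carbon [kgCO2e]:", embodied_carbon(bill_of_quantities))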

  10. Emissions Trading

    NARCIS (Netherlands)

    Woerdman, Edwin; Backhaus, Juergen

    2014-01-01

    Emissions trading is a market-based instrument to achieve environmental targets in a cost-effective way by allowing legal entities to buy and sell emission rights. The current international dissemination and intended linking of emissions trading schemes underlines the growing relevance of this

  11. Outstanding field emission properties of wet-processed titanium dioxide coated carbon nanotube based field emission devices

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jinzhuo; Ou-Yang, Wei, E-mail: ouyangwei@phy.ecnu.edu.cn; Chen, Xiaohong; Guo, Pingsheng; Piao, Xianqing; Sun, Zhuo [Engineering Research Center for Nanophotonics and Advanced Instrument, Ministry of Education, Department of Physics, East China Normal University, 3663 North Zhongshan Road, Shanghai 200062 (China); Xu, Peng; Wang, Miao [Department of Physics, Zhejiang University, 38 ZheDa Road, Hangzhou 310027 (China); Li, Jun [Department of Electronic Science and Technology, Tongji University, 4800 Caoan Road, Shanghai 201804 (China)

    2015-02-16

    Field emission devices using a wet-processed composite cathode of carbon nanotube films coated with titanium dioxide exhibit outstanding field emission characteristics, including an ultralow turn-on field of 0.383 V μm⁻¹ and a threshold field of 0.657 V μm⁻¹, corresponding to a very high field enhancement factor of 20 000, exceptional current stability, and excellent emission uniformity. The improved field emission properties are attributed to the enhanced edge effect together with the reduced screening effect and the lowered work function of the composite cathode. In addition, the highly stable electron emission is attributed to the presence of titanium dioxide nanoparticles on the carbon nanotubes, which shield the cathode from the influence of ions and free radicals created in the emission process as well as residual oxygen gas in the device. The high-performance solution-processed composite cathode demonstrates great potential for application in vacuum electronic devices.

  12. EFFICIENT SELECTION AND CLASSIFICATION OF INFRARED EXCESS EMISSION STARS BASED ON AKARI AND 2MASS DATA

    Energy Technology Data Exchange (ETDEWEB)

    Huang Yafang; Li Jinzeng [National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012 (China); Rector, Travis A. [University of Alaska, 3211 Providence Drive, Anchorage, AK 99508 (United States); Mallamaci, Carlos C., E-mail: ljz@nao.cas.cn [Observatorio Astronomico Felix Aguilar, Universidad Nacional de San Juan (Argentina)

    2013-05-15

    The selection of young stellar objects (YSOs) based on excess emission in the infrared is easily contaminated by post-main-sequence stars and various types of emission line stars with similar properties. We define in this paper stringent criteria for an efficient selection and classification of stellar sources with infrared excess emission based on combined Two Micron All Sky Survey (2MASS) and AKARI colors. First of all, bright dwarfs and giants with known spectral types were selected from the Hipparcos Catalogue and cross-identified with the 2MASS and AKARI Point Source Catalogues to produce the main-sequence and the post-main-sequence tracks, which appear as expected as tight tracks with very small dispersion. However, several of the main-sequence stars indicate excess emission in the color space. Further investigations based on the SIMBAD data help to clarify their nature as classical Be stars, which are found to be located in a well isolated region on each of the color-color (C-C) diagrams. Several kinds of contaminants were then removed based on their distribution in the C-C diagrams. A test sample of Herbig Ae/Be stars and classical T Tauri stars were cross-identified with the 2MASS and AKARI catalogs to define the loci of YSOs with different masses on the C-C diagrams. Well classified Class I and Class II sources were taken as a second test sample to discriminate between various types of YSOs at possibly different evolutionary stages. This helped to define the loci of different types of YSOs and a set of criteria for selecting YSOs based on their colors in the near- and mid-infrared. Candidate YSOs toward IC 1396 indicating excess emission in the near-infrared were employed to verify the validity of the new source selection criteria defined based on C-C diagrams compiled with the 2MASS and AKARI data. Optical spectroscopy and spectral energy distributions of the IC 1396 sample yield a clear identification of the YSOs and further confirm the criteria defined

  13. EFFICIENT SELECTION AND CLASSIFICATION OF INFRARED EXCESS EMISSION STARS BASED ON AKARI AND 2MASS DATA

    International Nuclear Information System (INIS)

    Huang Yafang; Li Jinzeng; Rector, Travis A.; Mallamaci, Carlos C.

    2013-01-01

    The selection of young stellar objects (YSOs) based on excess emission in the infrared is easily contaminated by post-main-sequence stars and various types of emission line stars with similar properties. We define in this paper stringent criteria for an efficient selection and classification of stellar sources with infrared excess emission based on combined Two Micron All Sky Survey (2MASS) and AKARI colors. First of all, bright dwarfs and giants with known spectral types were selected from the Hipparcos Catalogue and cross-identified with the 2MASS and AKARI Point Source Catalogues to produce the main-sequence and the post-main-sequence tracks, which appear as expected as tight tracks with very small dispersion. However, several of the main-sequence stars indicate excess emission in the color space. Further investigations based on the SIMBAD data help to clarify their nature as classical Be stars, which are found to be located in a well isolated region on each of the color-color (C-C) diagrams. Several kinds of contaminants were then removed based on their distribution in the C-C diagrams. A test sample of Herbig Ae/Be stars and classical T Tauri stars were cross-identified with the 2MASS and AKARI catalogs to define the loci of YSOs with different masses on the C-C diagrams. Well classified Class I and Class II sources were taken as a second test sample to discriminate between various types of YSOs at possibly different evolutionary stages. This helped to define the loci of different types of YSOs and a set of criteria for selecting YSOs based on their colors in the near- and mid-infrared. Candidate YSOs toward IC 1396 indicating excess emission in the near-infrared were employed to verify the validity of the new source selection criteria defined based on C-C diagrams compiled with the 2MASS and AKARI data. Optical spectroscopy and spectral energy distributions of the IC 1396 sample yield a clear identification of the YSOs and further confirm the criteria defined

  14. Present day engines pollutant emissions: proposed model for refinery bases impact; Emissions de polluants des moteurs actuels: modelisation de l'impact des bases de raffinage

    Energy Technology Data Exchange (ETDEWEB)

    Hochart, N.; Jeuland, N.; Montagne, X. [Institut Francais du Petrole (IFP), Div. Techniques d' Applications Energetiques, 92 - Rueil-Malmaison (France); Raux, S. [Institut Francais du Petrole (IFP), Div. Techniques d' Applications Energetiques, Centre d' Etudes et de Developpement Industriel, Rene Navarre, 69 - Vernaison (France); Belot, G.; Cahill, B. [PSA-Peugiot-Citroen, 92 - La Garenne-Colombes (France); Faucon, R.; Petit, A. [Renault, 91 - Lardy (France); Michon, S. [Renault Trucks Powertrain, 69 - Saint Priest (France)

    2003-07-01

    Air quality improvement, especially in urban areas, is one of the major concerns for the coming years. For this reason, car manufacturers, equipment manufacturers and refiners have explored development issues to comply with increasingly severe anti-pollution requirements. In such a context, the identification of the most promising improvement options is essential. A research program, carried out by IFP (Institut francais du petrole), and supported by the French Ministry of Industry, PSA-Peugeot-Citroen, Renault and RVI (Renault Vehicules Industriels), has been built to study this point. It is based on a 4-year program with different steps focused on new engine technologies which will be available in the next 20 years in order to respond to increasingly severe pollutant and CO2 emissions regulations. This program is divided into three main parts: the first one for Diesel car engines, the second for Diesel truck engines and the third for spark ignition engines. The aim of the work reported here is to characterize the effect of fuel formulation on pollutant emissions and engine tuning for different engine technologies. The originality of this study is to use refinery bases as parameters and not conventional physical or chemical parameters. The tested fuels have been chosen in order to represent the major refinery bases expected to be produced in the near future. These results, expressed with linear correlations between fuel composition and pollutant emissions, will help to give a new orientation to the refinery tool. The engines presented in this publication are, for spark ignition engines, an EuroII lean-burn engine (the Honda VTEC, which equips the Honda Civic) and an EuroIII 1.8 l stoichiometric-running Renault engine which equips the Laguna vehicles, and, for diesel engines, an EuroII Renault Laguna 2.2 l indirect injection diesel engine and an EuroII RVI truck engine. For the fuel formulation, an original approach is proposed: while the classical studies are based

  15. Analysing Test-Takers’ Views on a Computer-Based Speaking Test

    Directory of Open Access Journals (Sweden)

    Marian Amengual-Pizarro

    2017-11-01

    Full Text Available This study examines test-takers’ views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was administered to 80 test-takers who took the APTIS speaking test at the Universidad de Alcalá in April 2016. Results reveal that examinees believe computer-based tests provide a valid measure of oral competence in English and are considered to be an adequate method for the assessment of speaking. Interestingly, the data suggest that personal characteristics of test-takers seem to play a key role in deciding upon the most suitable and reliable delivery mode.

  16. Phylogenetic tree based on complete genomes using fractal and correlation analyses without sequence alignment

    Directory of Open Access Journals (Sweden)

    Zu-Guo Yu

    2006-06-01

    Full Text Available The complete genomes of living organisms have provided much information on their phylogenetic relationships. Similarly, the complete genomes of chloroplasts have helped resolve the evolution of this organelle in photosynthetic eukaryotes. In this review, we describe two algorithms to construct phylogenetic trees based on the theories of fractals and dynamic language using complete genomes. These algorithms were developed by our research group in the past few years. Our distance-based phylogenetic tree of 109 prokaryotes and eukaryotes agrees with the biologists' "tree of life" based on the 16S-like rRNA genes in a majority of basic branchings and most lower taxa. Our phylogenetic analysis also shows that the chloroplast genomes are separated into two major clades corresponding to chlorophytes s.l. and rhodophytes s.l. The interrelationships among the chloroplasts are largely in agreement with the current understanding on chloroplast evolution.

  17. Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM

    Science.gov (United States)

    Cajka, R.; Vaskova, J.; Vasek, J.

    2018-04-01

    For decades, attention has been paid to the interaction of foundation structures and subsoil and to the development of interaction models. Given that analytical solutions of subsoil-structure interaction can be deduced only for some simple shapes of load, analytical solutions are increasingly being replaced by numerical solutions (e.g., FEM – the Finite Element Method). Numerical analysis provides greater possibilities for taking into account the real factors involved in the subsoil-structure interaction and was also used in this article. This makes it possible to design foundation structures more efficiently while keeping them reliable and secure. Currently there are several software packages that can deal with the interaction of foundations and subsoil. It has been demonstrated that the non-commercial software MKPINTER (created by Cajka) provides results appropriately close to actual measured values. In the MKPINTER software, stress-strain analysis of the elastic half-space is carried out by means of Gauss numerical integration and the Jacobian of transformation. Input data for the numerical analysis were obtained from an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed at the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with values observed during the experimental loading test.
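
    The kind of half-space evaluation mentioned above, Gauss numerical integration of an elastic half-space solution, can be sketched for the vertical stress under a uniformly loaded rectangle using the classical Boussinesq point-load kernel. This is a textbook illustration under assumed dimensions and pressure, not the MKPINTER implementation:

        import numpy as np

        def sigma_z_rectangle(q, a, b, z, n=8):
            """Vertical stress at depth z beneath the centre of an a x b rectangle
            carrying a uniform pressure q, by Gauss-Legendre quadrature of the
            Boussinesq point-load kernel."""
            xi, wi = np.polynomial.legendre.leggauss(n)
            # Map Gauss points and weights from [-1, 1] to the rectangle half-widths
            x, wx = 0.5 * a * xi, 0.5 * a * wi
            y, wy = 0.5 * b * xi, 0.5 * b * wi
            total = 0.0
            for xk, wxk in zip(x, wx):
                for yk, wyk in zip(y, wy):
                    r2 = xk**2 + yk**2
                    kernel = 3.0 * z**3 / (2.0 * np.pi * (r2 + z**2) ** 2.5)
                    total += wxk * wyk * q * kernel
            return total

        # Illustrative input: 100 kPa over a 2 m x 2 m footing, stress at 1 m depth
        print(sigma_z_rectangle(q=100.0, a=2.0, b=2.0, z=1.0))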

  18. Sediment Characteristics of Mergui Basin, Andaman Sea based on Multi-proxy Analyses

    Directory of Open Access Journals (Sweden)

    Rina Zuraida

    2018-02-01

    Full Text Available This paper presents the characteristics of sediment from core BS-36 (6°55.85’ S and 96°7.48’ E, 1147.1 m water depth) that was acquired in the Mergui Basin, Andaman Sea. The analyses involved megascopic description, core scanning by multi-sensor core logger, and carbonate content measurement. The purpose of this study is to determine the physical and chemical characteristics of sediment to infer the depositional environment. The results show that this core can be divided into 5 lithologic units that represent various environmental conditions. The bottom part, Units V and IV, was inferred to have been deposited under suboxic to anoxic bottom conditions combined with high productivity and low precipitation. Unit III was deposited during high precipitation and oxic conditions due to ocean ventilation. In the upper part, Units II and I were deposited during higher precipitation, higher carbonate production and suboxic to anoxic conditions. Keywords: sediment characteristics, Mergui Basin, Andaman Sea, suboxic, anoxic, oxic, carbonate content

  19. Revised age of deglaciation of Lake Emma based on new radiocarbon and macrofossil analyses

    Science.gov (United States)

    Elias, S.A.; Carrara, P.E.; Toolin, L.J.; Jull, A.J.T.

    1991-01-01

    Previous radiocarbon ages of detrital moss fragments in basal organic sediments of Lake Emma indicated that extensive deglaciation of the San Juan Mountains occurred prior to 14,900 yr B.P. (Carrara et al., 1984). Paleoecological analyses of insect and plant macrofossils from these basal sediments cast doubt on the reliability of the radiocarbon ages. Subsequent accelerator radiocarbon dates of insect fossils and wood fragments indicate an early Holocene age, rather than a late Pleistocene age, for the basal sediments of Lake Emma. These new radiocarbon ages suggest that by at least 10,000 yr B.P. deglaciation of the San Juan Mountains was complete. The insect and plant macrofossils from the basal organic sediments indicate a higher-than-present treeline during the early Holocene. The insect assemblages consisted of about 30% bark beetles, which contrasts markedly with the composition of insects from modern lake sediments and modern specimens collected in the Lake Emma cirque, in which bark beetles comprise only about 3% of the assemblages. In addition, in the fossil assemblages there were a number of flightless insect species (not subject to upslope transport by wind) indicative of coniferous forest environments. These insects were likewise absent in the modern assemblage. © 1991.

  20. Is autoimmunology a discipline of its own? A big data-based bibliometric and scientometric analyses.

    Science.gov (United States)

    Watad, Abdulla; Bragazzi, Nicola Luigi; Adawi, Mohammad; Amital, Howard; Kivity, Shaye; Mahroum, Naim; Blank, Miri; Shoenfeld, Yehuda

    2017-06-01

    Autoimmunology is a super-specialty of immunology specifically dealing with autoimmune disorders. To assess the extant literature concerning autoimmune disorders, bibliometric and scientometric analyses (namely, research topics/keywords co-occurrence, journal co-citation, citations, and scientific output trends - both crude and normalized, authors network, leading authors, countries, and organizations analysis) were carried out using open-source software, namely, VOSviewer and SciCurve. A corpus of 169,519 articles containing the keyword "autoimmunity" was utilized, selecting PubMed/MEDLINE as the bibliographic thesaurus. Journals specifically devoted to autoimmune disorders numbered six and covered approximately 4.15% of the entire scientific production. Compared with the full corpus (from 1946 on), these specialized journals were established only a few decades ago. Top countries were the United States, Japan, Germany, United Kingdom, Italy, China, France, Canada, Australia, and Israel. Trending topics are represented by the role of microRNAs (miRNAs) in the etiopathogenesis of autoimmune disorders, contributions of genetics and of epigenetic modifications, role of vitamins, management during pregnancy and the impact of gender. New subsets of immune cells have been extensively investigated, with a focus on interleukin production and release and on Th17 cells. Autoimmunology is emerging as a new discipline within immunology, with its own bibliometric properties, an identified scientific community and specifically devoted journals.

  1. Shielding analysis method applied to nuclear ship 'MUTSU' and its evaluation based on experimental analyses

    International Nuclear Information System (INIS)

    Yamaji, Akio; Miyakoshi, Jun-ichi; Iwao, Yoshiaki; Tsubosaka, Akira; Saito, Tetsuo; Fujii, Takayoshi; Okumura, Yoshihiro; Suzuoki, Zenro; Kawakita, Takashi.

    1984-01-01

    The procedures of shielding analysis used for the shielding modification design of the Nuclear Ship "MUTSU" are described. The calculations of the radiation distribution on board were made using the Sn codes ANISN and TWOTRAN, a point kernel code QAD and a Monte Carlo code MORSE. The accuracies of these calculations were investigated through the analysis of various shielding experiments: the shield tank experiment of the Nuclear Ship "Otto Hahn", the shielding mock-up experiment for "MUTSU" performed in JRR-4, the shielding benchmark experiment using the 16N radiation facility of AERE Harwell and the shielding effect experiment of the ship structure performed in the training ship "Shintoku-Maru". The values calculated by ANISN agree with the data measured at "Otto Hahn" within a factor of 2 for fast neutrons and within a factor of 3 for epithermal and thermal neutrons. The γ-ray dose rates calculated by QAD agree with the measured values within 30% for the analysis of the experiment in JRR-4. The design values for "MUTSU" were determined on the basis of these experimental analyses. (author)

  2. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, data on the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from those of previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasingly stable. Both core journals paid broad attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
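
    A minimal sketch of the perplexity-based choice of the number of topics described above, using scikit-learn's LDA implementation on a few placeholder abstracts; the corpus, preprocessing and settings of the study are not reproduced:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        abstracts = [
            "citation analysis of journal impact indicators",
            "h-index and research productivity of countries",
            "co-authorship networks and collaboration patterns",
            "bibliometric mapping of emerging research topics",
        ]  # placeholder documents, not the study's corpus

        X = CountVectorizer(stop_words="english").fit_transform(abstracts)

        for k in (2, 5, 10):
            lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
            # Lower perplexity indicates a better-fitting topic model
            print(k, lda.perplexity(X))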

  3. Analysis of determination modalities concerning the exposure and emission limits values of chemical and radioactive substances; Analyse des modalites de fixation des valeurs limites d'exposition et d'emission pour les substances chimiques et radioactives

    Energy Technology Data Exchange (ETDEWEB)

    Schieber, C.; Schneider, T

    2002-08-01

    This document presents the generic approach adopted by various organizations for the determination of public exposure limit values for chemical and radioactive substances and for the determination of limit values for chemical product emissions from certain installations. (A.L.B.)

  4. Data analyses and modelling for risk based monitoring of mycotoxins in animal feed

    NARCIS (Netherlands)

    Ine van der Fels-Klerx, H.J.; Adamse, Paulien; Punt, Ans; Asselt, van Esther D.

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as mycotoxins, in feed and food. These programs need to be risk-based, implying that the checks are regular and proportional to the estimated risk for animal and human health. This study

  5. Handbook of methods for risk-based analyses of technical specifications

    International Nuclear Information System (INIS)

    Samanta, P.K.; Kim, I.S.; Mankamo, T.; Vesely, W.E.

    1994-12-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements are based on deterministic analysis and engineering judgments. Experiences with plant operation indicate that some elements of the requirements are unnecessarily restrictive, while a few may not be conducive to safety. The US Nuclear Regulatory Commission (USNRC) Office of Research has sponsored research to develop systematic risk-based methods to improve various aspects of TS requirements. This handbook summarizes these risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), scheduled or preventive maintenance, action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), and management of plant configurations resulting from outages of systems, or components. For each topic, the handbook summarizes analytic methods with data needs, outlines the insights to be gained, lists additional references, and gives examples of evaluations
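
    One of the simplest quantities used in such AOT evaluations is the single-downtime risk contribution: the increase in core damage frequency while a component is out of service multiplied by the outage duration. The sketch below is illustrative only; the frequencies and AOT are made-up numbers, not values from the handbook:

        HOURS_PER_YEAR = 8760.0

        def single_downtime_risk(cdf_nominal, cdf_component_down, aot_hours):
            """Incremental core damage probability accumulated over one allowed
            outage time (AOT), given nominal and degraded core damage frequencies
            expressed per year."""
            delta_cdf = cdf_component_down - cdf_nominal
            return delta_cdf * aot_hours / HOURS_PER_YEAR

        # Illustrative values only
        print(single_downtime_risk(cdf_nominal=2.0e-5, cdf_component_down=8.0e-5,
                                   aot_hours=72.0))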

  6. Analysing a Web-Based E-Commerce Learning Community: A Case Study in Brazil.

    Science.gov (United States)

    Joia, Luiz Antonio

    2002-01-01

    Demonstrates the use of a Web-based participative virtual learning environment for graduate students in Brazil enrolled in an electronic commerce course in a Masters in Business Administration program. Discusses learning communities; computer-supported collaborative work and collaborative learning; influences on student participation; the role of…

  7. Handbook of methods for risk-based analyses of technical specifications

    Energy Technology Data Exchange (ETDEWEB)

    Samanta, P.K.; Kim, I.S. [Brookhaven National Lab., Upton, NY (United States); Mankamo, T. [Avaplan Oy, Espoo (Finland); Vesely, W.E. [Science Applications International Corp., Dublin, OH (United States)

    1994-12-01

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements are based on deterministic analysis and engineering judgments. Experiences with plant operation indicate that some elements of the requirements are unnecessarily restrictive, while a few may not be conducive to safety. The US Nuclear Regulatory Commission (USNRC) Office of Research has sponsored research to develop systematic risk-based methods to improve various aspects of TS requirements. This handbook summarizes these risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), scheduled or preventive maintenance, action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), and management of plant configurations resulting from outages of systems, or components. For each topic, the handbook summarizes analytic methods with data needs, outlines the insights to be gained, lists additional references, and gives examples of evaluations.

  8. Group analyses of connectivity-based cortical parcellation using repeated k-means clustering

    NARCIS (Netherlands)

    Nanetti, Luca; Cerliani, Leonardo; Gazzola, Valeria; Renken, Remco; Keysers, Christian

    2009-01-01

    K-means clustering has become a popular tool for connectivity-based cortical segmentation using Diffusion Weighted Imaging (DWI) data. A sometimes ignored issue is, however, that the output of the algorithm depends on the initial placement of starting points, and that different sets of starting
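
    The sensitivity to starting points that the abstract refers to can be checked by re-running k-means with different random seeds and comparing the resulting labelings, for example with the adjusted Rand index. The connectivity profiles below are synthetic stand-ins, not DWI data:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)
        profiles = rng.normal(size=(500, 40))   # synthetic voxel connectivity profiles

        labelings = [
            KMeans(n_clusters=4, n_init=1, random_state=seed).fit_predict(profiles)
            for seed in range(10)
        ]

        # Pairwise agreement between runs; values near 1 indicate a stable parcellation
        scores = [adjusted_rand_score(labelings[0], lab) for lab in labelings[1:]]
        print("Agreement with first run:", np.round(scores, 2))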

  9. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    Science.gov (United States)

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  10. Analyses of Receptive and Productive Korean EFL Vocabulary: Computer-Based Vocabulary Learning Program

    Science.gov (United States)

    Kim, Scott Sungki

    2013-01-01

    The present research study investigated the effects of 8 versions of a computer-based vocabulary learning program on receptive and productive knowledge levels of college students. The participants were 106 male and 103 female Korean EFL students from Kyungsung University and Kwandong University in Korea. Students who participated in versions of…

  11. A mathematical/physics carbon emission reduction strategy for building supply chain network based on carbon tax policy

    Directory of Open Access Journals (Sweden)

    Li Xueying

    2017-03-01

    Full Text Available Under the background of a low carbon economy, this paper examines the impact of carbon tax policy on supply chain network emission reduction. The integer linear programming method is used to establish a supply chain network emission reduction model; the model considers the cost of CO2 emissions and analyses the impact of different carbon prices on cost and carbon emissions in supply chains. The results show that the implementation of a carbon tax policy can reduce CO2 emissions in the building supply chain, but an increase in the carbon price does not necessarily produce a further reduction effect and may bring a financial burden to the enterprise. This paper presents a reasonable carbon price range and provides decision makers with strategies towards realizing a low carbon building supply chain in an economical manner.
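
    A toy version of a cost-plus-carbon objective of the kind examined in the paper can be written as a linear program. The sketch below solves a continuous relaxation with SciPy rather than the paper's integer formulation, and all supplier costs, intensities and capacities are illustrative:

        import numpy as np
        from scipy.optimize import linprog

        demand = 100.0                           # units to ship to the building site
        cost = np.array([4.0, 6.0])              # transport cost per unit, two suppliers
        emis = np.array([0.8, 0.3])              # tCO2 per unit shipped
        capacity = np.array([70.0, 70.0])

        for carbon_price in (0.0, 5.0, 20.0):    # carbon tax per tCO2
            c = cost + carbon_price * emis       # total unit cost including the tax
            res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand],
                          bounds=list(zip([0.0, 0.0], capacity)))
            print(carbon_price, res.x, "total emissions:", float(emis @ res.x))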

  12. Quantitative evaluation of time-series GHG emissions by sector and region using consumption-based accounting

    International Nuclear Information System (INIS)

    Homma, Takashi; Akimoto, Keigo; Tomoda, Toshimasa

    2012-01-01

    This study estimates global time-series consumption-based GHG emissions by region from 1990 to 2005, including both CO2 and non-CO2 GHG emissions. Estimations are conducted for the whole economy and for two specific sectors: manufacturing and agriculture. Especially in the agricultural sector, it is important to include non-CO2 GHG emissions because these are the major emissions present. In most of the regions examined, the improvements in GHG intensities achieved in the manufacturing sector are larger than those in the agricultural sector. Compared with developing regions, most developed regions have consistently larger per-capita consumption-based GHG emissions over the whole economy, as well as higher production-based emissions. In the manufacturing sector, differences calculated by subtracting production-based emissions from consumption-based GHG emissions are determined by the regional economic level while, in the agricultural sector, they are dependent on regional production structures that are determined by international trade competitiveness. In the manufacturing sector, these differences are consistently and increasingly positive for the U.S., EU15 and Japan but negative for developing regions. In the agricultural sector, the differences calculated for the major agricultural importers like Japan and the EU15 are consistently positive while those of exporters like the U.S., Australia and New Zealand are consistently negative. - Highlights: ► We evaluate global time-series production-based and consumption-based GHG emissions. ► We focus on both CO2 and non-CO2 GHG emissions, broken down by region and by sector. ► Including non-CO2 GHG emissions is important in agricultural sector. ► In agriculture, differences in accountings are dependent on production structures. ► In manufacturing sector, differences in accountings are determined by economic level.
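
    The accounting identity behind these comparisons is simple: consumption-based emissions equal production-based emissions minus emissions embodied in exports plus emissions embodied in imports. A one-line sketch with illustrative figures (not values from the study):

        def consumption_based(production, embodied_exports, embodied_imports):
            """Consumption-based GHG emissions from the production-based total."""
            return production - embodied_exports + embodied_imports

        # Illustrative figures in MtCO2e
        print(consumption_based(production=500.0, embodied_exports=120.0,
                                embodied_imports=180.0))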

  13. A new physically-based quantification of marine isoprene and primary organic aerosol emissions

    Directory of Open Access Journals (Sweden)

    N. Meskhidze

    2009-07-01

    Full Text Available The global marine sources of organic carbon (OC) are estimated here using a physically-based parameterization for the emission of marine isoprene and primary organic matter. The marine isoprene emission model incorporates new physical parameters such as light sensitivity of phytoplankton isoprene production and dynamic euphotic depth to simulate hourly marine isoprene emissions totaling 0.92 Tg C yr−1. Sensitivity studies using different schemes for the euphotic zone depth and ocean phytoplankton speciation produce the lower and the upper range of marine-isoprene emissions of 0.31 to 1.09 Tg C yr−1, respectively. Established relationships between sea spray fractionation of water-insoluble organic carbon (WIOC) and chlorophyll-a concentration are used to estimate the total primary sources of marine sub- and super-micron OC of 2.9 and 19.4 Tg C yr−1, respectively. The consistent spatial and temporal resolution of the two emission types allows us, for the first time, to explore the relative contributions of sub- and super-micron organic matter and marine isoprene-derived secondary organic aerosol (SOA) to the total OC fraction of marine aerosol. Using a fixed 3% mass yield for the conversion of isoprene to SOA, our emission simulations show minor (<0.2%) contribution of marine isoprene to the total marine source of OC on a global scale. However, our model calculations also indicate that over the tropical oceanic regions (30° S to 30° N), marine isoprene SOA may contribute over 30% of the total monthly-averaged sub-micron OC fraction of marine aerosol. The estimated contribution of marine isoprene SOA to hourly-averaged sub-micron marine OC emission is even higher, approaching 50% over the vast regions of the oceans during the midday hours when isoprene emissions are highest. As it is widely believed that sub-micron OC has the potential to influence the cloud droplet activation of marine aerosols, our

  14. Progress Towards Improved MOPITT-based Biomass Burning Emission Inventories for the Amazon Basin

    Science.gov (United States)

    Deeter, M. N.; Emmons, L. K.; Martinez-Alonso, S.; Wiedinmyer, C.; Arellano, A. F.; Fischer, E. V.; González-Alonso, L.; Val Martin, M.; Gatti, L. V.; Miller, J. B.; Gloor, M.; Domingues, L. G.; Correia, C. S. D. C.

    2016-12-01

    The 17-year-long record of carbon monoxide (CO) concentrations from the MOPITT satellite instrument is uniquely suited for studying the interannual variability of biomass burning emissions. Data assimilation methods based on Ensemble Kalman Filtering are currently being developed to infer CO emissions within the Amazon Basin from MOPITT measurements along with additional datasets. The validity of these inversions will depend on the characteristics of the MOPITT CO retrievals (e.g., retrieval biases and vertical resolution) as well as the representation of chemistry and dynamics in the chemical transport model (CAM-Chem) used in the data assimilation runs. For example, the assumed vertical distribution ("injection height") of the biomass burning emissions plays a particularly important role. We will review recent progress made on a project to improve biomass burning emission inventories for the Amazon Basin. MOPITT CO retrievals over the Amazon Basin are first characterized, focusing on the MOPITT Version 6 "multispectral" retrieval product (exploiting both thermal-infrared and near-infrared channels). Validation results based on in-situ vertical profiles measured between 2010 and 2013 are presented for four sites in the Amazon Basin. Results indicate a significant negative bias in MOPITT retrieved lower-tropospheric CO concentrations. The seasonal and geographical variability of smoke injection height over the Amazon Basin is then analyzed using a MISR plume height climatology. This work has led to the development of a new fire emission injection height parameterization that was implemented in CAM-Chem and GEOS-Chem. Finally, we present initial data assimilation results for the Amazon Basin and evaluate the results using available field campaign measurements.

  15. Statistical analyses of incidents on onshore gas transmission pipelines based on PHMSA database

    International Nuclear Information System (INIS)

    Lam, Chio; Zhou, Wenxing

    2016-01-01

    This article reports statistical analyses of the mileage and pipe-related incident data corresponding to the onshore gas transmission pipelines in the US between 2002 and 2013 collected by the Pipeline and Hazardous Materials Safety Administration of the US Department of Transportation. The analysis indicates that there are approximately 480,000 km of gas transmission pipelines in the US, approximately 60% of them more than 45 years old as of 2013. Eighty percent of the pipelines are Class 1 pipelines, and about 20% of the pipelines are Classes 2 and 3 pipelines. It is found that third-party excavation, external corrosion, material failure and internal corrosion are the four leading failure causes, responsible for more than 75% of the total incidents. The 12-year average rate of rupture equals 3.1 × 10⁻⁵ per km-year due to all failure causes combined. External corrosion is the leading cause for ruptures: the 12-year average rupture rate due to external corrosion equals 1.0 × 10⁻⁵ per km-year and is twice the rupture rate due to third-party excavation or material failure. The study provides insights into the current state of gas transmission pipelines in the US and baseline failure statistics for the quantitative risk assessments of such pipelines. - Highlights: • Analyze PHMSA pipeline mileage and incident data between 2002 and 2013. • Focus on gas transmission pipelines. • Leading causes for pipeline failures are identified. • Provide baseline failure statistics for risk assessments of gas transmission pipelines.
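
    The baseline rates quoted above are of the form incidents divided by exposure, i.e. pipeline length times observation years. A sketch with made-up counts (not the PHMSA figures):

        def incident_rate(n_incidents, length_km, years):
            """Failure or rupture rate per km-year."""
            return n_incidents / (length_km * years)

        # Illustrative inputs only
        rate = incident_rate(n_incidents=180, length_km=480_000.0, years=12)
        print(f"{rate:.2e} per km-year")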

  16. White matter disruption in moderate/severe pediatric traumatic brain injury: Advanced tract-based analyses

    Directory of Open Access Journals (Sweden)

    Emily L. Dennis

    2015-01-01

    Full Text Available Traumatic brain injury (TBI) is the leading cause of death and disability in children and can lead to a wide range of impairments. Brain imaging methods such as DTI (diffusion tensor imaging) are uniquely sensitive to the white matter (WM) damage that is common in TBI. However, higher-level analyses using tractography are complicated by the damage and decreased FA (fractional anisotropy) characteristic of TBI, which can result in premature tract endings. We used the newly developed autoMATE (automated multi-atlas tract extraction) method to identify differences in WM integrity. 63 pediatric patients aged 8–19 years with moderate/severe TBI were examined with cross sectional scanning at one or two time points after injury: a post-acute assessment 1–5 months post-injury and a chronic assessment 13–19 months post-injury. A battery of cognitive function tests was performed in the same time periods. 56 children were examined in the first phase, 28 TBI patients and 28 healthy controls. In the second phase 34 children were studied, 17 TBI patients and 17 controls (27 participants completed both post-acute and chronic phases). We did not find any significant group differences in the post-acute phase. Chronically, we found extensive group differences, mainly for mean and radial diffusivity (MD and RD). In the chronic phase, we found higher MD and RD across a wide range of WM. Additionally, we found correlations between these WM integrity measures and cognitive deficits. This suggests a distributed pattern of WM disruption that continues over the first year following a TBI in children.

  17. Geology of Southern Guinevere Planitia, Venus, based on analyses of Goldstone radar data

    International Nuclear Information System (INIS)

    Arvidson, R.E.; Plaut, J.J.; Jurgens, R.F.; Saunders, R.S.; Slade, M.A.

    1989-01-01

    The ensemble of 41 backscatter images of Venus acquired by the S Band (12.6 cm) Goldstone radar system covers approx. 35 million km² and includes the equatorial portion of Guinevere Planitia, Navka Planitia, Heng-O Chasma, and Tinatin Planitia, and parts of Devana Chasma and Phoebe Regio. The images and associated altimetry data combine relatively high spatial resolution (1 to 10 km) with small incidence angles (less than 10 deg) for regions not covered by either Venera Orbiter or Arecibo radar data. Systematic analyses of the Goldstone data show that: (1) Volcanic plains dominate, including groups of small volcanic constructs, radar bright flows on a NW-SE arm of Phoebe Regio and on Ushas Mons and circular volcano-tectonic depressions; (2) Some of the regions imaged by Goldstone have high radar cross sections, including the flows on Ushas Mons and the NW-SE arm of Phoebe Regio, and several other unnamed hills, ridged terrains, and plains areas; (3) A 1000 km diameter multiringed structure is observed and appears to have a morphology not observed in Venera data (The northern section corresponds to Heng-O Chasma); (4) A 150 km wide, 2 km deep, 1400 km long rift valley with upturned flanks is located on the western flank of Phoebe Regio and extends into Devana Chasma; (5) A number of structures can be discerned in the Goldstone data, mainly trending NW-SE and NE-SW, directions similar to those discerned in Pioneer-Venus topography throughout the equatorial region; and (6) The abundance of circular and impact features is similar to the plains global average defined from Venera and Arecibo data, implying that the terrain imaged by Goldstone has typical crater retention ages, measured in hundreds of millions of years. The rate of resurfacing is less than or equal to 4 km/Ga

  18. Intra-specific genetic relationship analyses of Elaeagnus angustifolia based on RP-HPLC biochemical markers

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Elaeagnus angustifolia Linn. has various ecological, medicinal and economic uses. An approach was established using RP-HPLC (reversed-phase high-performance liquid chromatography) to classify and analyse the intra-specific genetic relationships of seventeen populations of E. angustifolia, collected from the Xinjiang areas of China. Chromatograms of alcohol-soluble proteins produced by the seventeen populations of E. angustifolia were compared. Each chromatogram of alcohol-soluble proteins came from a single seed of one wild plant only. The results showed that when using a Waters Delta Pak. C18, 5 μm particle size reversed phase column (150 mm×3.9 mm), a linear gradient of 25%~60% solvent B with flow rate of 1 ml/min and run time of 67 min, the chromatography yielded optimum separation of E. angustifolia alcohol-soluble proteins. Representative peaks in each population were chosen according to peak area and occurrence in every seed. The converted data on the elution peaks of each population were different and could be used to represent those populations. GSC (genetic similarity coefficients) of 41% to 62% showed a medium degree of genetic diversity among the populations in these eco-areas. Cluster analysis showed that the seventeen populations of E. angustifolia could be divided into six clusters at the GSC=0.535 level and indicated the general and unique biochemical markers of these clusters. We suggest that E. angustifolia distribution in these eco-areas could be classified into six variable species. RP-HPLC was shown to be a rapid, repeatable and reliable method for E. angustifolia classification and identification and for analysis of genetic diversity.
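
    The similarity-and-clustering step can be sketched by encoding each population as the presence or absence of representative HPLC peaks and clustering on a Dice-type similarity. The peak matrix, distance metric and cut height below are hypothetical; the paper's exact GSC formula and peak data are not reproduced:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        # Rows: populations; columns: presence (1) / absence (0) of elution peaks
        peaks = np.array([[1, 1, 0, 1, 0, 1],
                          [1, 1, 0, 1, 1, 0],
                          [0, 1, 1, 0, 1, 1],
                          [0, 1, 1, 0, 1, 0],
                          [1, 0, 1, 1, 0, 1]])

        # Dice distance = 1 - similarity; UPGMA ("average") linkage on the distances
        dist = pdist(peaks, metric="dice")
        tree = linkage(dist, method="average")
        clusters = fcluster(tree, t=0.5, criterion="distance")  # illustrative cut height
        print("Cluster assignment per population:", clusters)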

  19. Ecogeographical associations between climate and human body composition: analyses based on anthropometry and skinfolds.

    Science.gov (United States)

    Wells, Jonathan C K

    2012-02-01

    In the 19th century, two "ecogeographical rules" were proposed hypothesizing associations of climate with mammalian body size and proportions. Data on human body weight and relative leg length support these rules; however, it is unknown whether such associations are attributable to lean tissue (the heat-producing component) or fat (energy stores). Data on weight, height, and two skinfold thicknesses were obtained from the literature for 137 nonindustrialized populations, providing 145 male and 115 female individual samples. A variety of indices of adiposity and lean mass were analyzed. Preliminary analyses indicated secular increases in skinfolds in men but not women, and associations of age and height with lean mass in both sexes. Decreasing annual temperature was associated with increasing body mass index (BMI), and increasing triceps but not subscapular skinfold. After adjusting for skinfolds, decreasing temperature remained associated with increasing BMI. These results indicate that colder environments favor both greater peripheral energy stores and greater lean mass. Contrasting results for triceps and subscapular skinfolds might be due to adaptive strategies either constraining central adiposity in cold environments to reduce cardiovascular risk, or favoring central adiposity in warmer environments to maintain energetic support of the immune system. Polynesian populations were analyzed separately and contradicted all of the climate trends, indicating support for the hypothesis that they are cold-adapted despite occupying a tropical region. It is unclear whether such associations emerge through natural selection or through trans-generational and life-course plasticity. These findings nevertheless aid understanding of the wide variability in human physique and adiposity. Copyright © 2011 Wiley Periodicals, Inc.

  20. A bead-based western for high-throughput cellular signal transduction analyses

    Science.gov (United States)

    Treindl, Fridolin; Ruprecht, Benjamin; Beiter, Yvonne; Schultz, Silke; Döttinger, Anette; Staebler, Annette; Joos, Thomas O.; Kling, Simon; Poetz, Oliver; Fehm, Tanja; Neubauer, Hans; Kuster, Bernhard; Templin, Markus F.

    2016-01-01

    Dissecting cellular signalling requires the analysis of a large number of proteins. The DigiWest approach we describe here transfers the western blot to a bead-based microarray platform. By combining gel-based protein separation with immobilization on microspheres, hundreds of replicas of the initial blot are created, thus enabling the comprehensive analysis of limited material, such as cells collected by laser capture microdissection, and extending traditional western blotting to reach proteomic scales. The combination of molecular weight resolution, sensitivity and signal linearity on an automated platform enables the rapid quantification of hundreds of specific proteins and protein modifications in complex samples. This high-throughput western blot approach allowed us to identify and characterize alterations in cellular signal transduction that occur during the development of resistance to the kinase inhibitor Lapatinib, revealing major changes in the activation state of Ephrin-mediated signalling and a central role for p53-controlled processes. PMID:27659302

  1. Space-Based Diagnosis of Surface Ozone Sensitivity to Anthropogenic Emissions

    Science.gov (United States)

    Martin, Randall V.; Fiore, Arlene M.; VanDonkelaar, Aaron

    2004-01-01

    We present a novel capability in satellite remote sensing with implications for air pollution control strategy. We show that the ratio of formaldehyde columns to tropospheric nitrogen dioxide columns is an indicator of the relative sensitivity of surface ozone to emissions of nitrogen oxides (NOx = NO + NO2) and volatile organic compounds (VOCs). The diagnosis from these space-based observations is highly consistent with current understanding of surface ozone chemistry based on in situ observations. The satellite-derived ratios indicate that surface ozone is more sensitive to emissions of NOx than of VOCs throughout most continental regions of the Northern Hemisphere during summer. Exceptions include Los Angeles and industrial areas of Germany. A seasonal transition occurs in the fall when surface ozone becomes less sensitive to NOx and more sensitive to VOCs.
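
    As a minimal sketch of the diagnostic described above, the Python example below classifies the ozone production regime from a formaldehyde-to-NO2 column ratio. The specific threshold value (~1) and the column magnitudes are illustrative assumptions, not values from the paper.

        # Sketch: classifying the surface-ozone production regime from satellite
        # column ratios of formaldehyde (HCHO) to tropospheric NO2, as the abstract
        # describes. The threshold of ~1 used here is illustrative only.
        def ozone_regime(hcho_column, no2_column, threshold=1.0):
            """Return the regime suggested by the HCHO/NO2 column ratio."""
            ratio = hcho_column / no2_column
            if ratio > threshold:
                return ratio, "NOx-sensitive (ozone responds mainly to NOx emission changes)"
            return ratio, "VOC-sensitive (ozone responds mainly to VOC emission changes)"

        # columns in arbitrary but consistent units (e.g. molecules per cm^2)
        for hcho, no2 in [(8.0e15, 2.0e15), (4.0e15, 9.0e15)]:
            ratio, regime = ozone_regime(hcho, no2)
            print(f"HCHO/NO2 = {ratio:.2f}: {regime}")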

  2. Critical experiments analyses by using 70 energy group library based on ENDF/B-VI

    Energy Technology Data Exchange (ETDEWEB)

    Tahara, Yoshihisa; Matsumoto, Hideki [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center; Huria, H.C.; Ouisloumen, M.

    1998-03-01

    The newly developed 70-group library has been validated by comparing kinf values from the continuous-energy Monte Carlo code MCNP and the two-dimensional spectrum calculation code PHOENIX-CP. The code employs the Discrete Angular Flux Method based on Collision Probability. The library has also been validated against a large number of critical experiments and numerical benchmarks for assemblies with MOX and Gd fuels. (author)

  3. Data Analyses and Modelling for Risk Based Monitoring of Mycotoxins in Animal Feed

    Directory of Open Access Journals (Sweden)

    H.J. (Ine van der Fels-Klerx

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as for mycotoxins, in feed and food. These programs need to be risk based implying the checks are regular and proportional to the estimated risk for animal and human health. This study aimed to prioritize feed products in the Netherlands for deoxynivalenol and aflatoxin B1 monitoring. Historical mycotoxin monitoring results from the period 2007–2016 were combined with data from other sources. Based on occurrence, groundnuts had high priority for aflatoxin B1 monitoring; some feed materials (maize and maize products and several oil seed products) and complete/complementary feed excluding dairy cattle and young animals had medium priority; and all other animal feeds and feed materials had low priority. For deoxynivalenol, maize by-products had a high priority, complete and complementary feed for pigs had a medium priority and all other feed and feed materials a low priority. When health consequence estimations were also included, the feed materials that ranked highest for aflatoxin B1 included sunflower seed, palm kernel expeller/extracts and maize. For deoxynivalenol, maize products were ranked highest, followed by various small grain cereals (products); all other feed materials were of lower concern. Results of this study have proven to be useful in setting up the annual risk based control program for mycotoxins in animal feed and feed materials.

  4. Failure probability analyses for PWSCC in Ni-based alloy welds

    International Nuclear Information System (INIS)

    Udagawa, Makoto; Katsuyama, Jinya; Onizawa, Kunio; Li, Yinsheng

    2015-01-01

    A number of cracks due to primary water stress corrosion cracking (PWSCC) in pressurized water reactors and Ni-based alloy stress corrosion cracking (NiSCC) in boiling water reactors have been detected around Ni-based alloy welds. The causes of crack initiation and growth due to stress corrosion cracking include weld residual stress, operating stress, the materials, and the environment. We have developed the analysis code PASCAL-NP for calculating the failure probability and assessment of the structural integrity of cracked components on the basis of probabilistic fracture mechanics (PFM) considering PWSCC and NiSCC. This PFM analysis code has functions for calculating the incubation time of PWSCC and NiSCC crack initiation, evaluation of crack growth behavior considering certain crack location and orientation patterns, and evaluation of failure behavior near Ni-based alloy welds due to PWSCC and NiSCC in a probabilistic manner. Herein, actual plants affected by PWSCC have been analyzed using PASCAL-NP. Failure probabilities calculated by PASCAL-NP are in reasonable agreement with the detection data. Furthermore, useful knowledge related to leakage due to PWSCC was obtained through parametric studies using this code

  5. Financial and Performance Analyses of Microcontroller Based Solar-Powered Autorickshaw for a Developing Country

    Directory of Open Access Journals (Sweden)

    Abu Raihan Mohammad Siddique

    2016-01-01

    This paper presents a case study to examine the economic viability and performance analysis of a microcontroller based solar powered battery operated autorickshaw (m-SBAR) for developing countries, which is compared with different types of rickshaws such as the pedal rickshaw (PR), battery operated autorickshaw (BAR), and solar-powered battery operated autorickshaw (SBAR), available in Bangladesh. The BAR consists of a rickshaw structure, a battery bank, a battery charge controller, a DC motor driver, and a DC motor whereas the proposed m-SBAR contains additional components like a solar panel and a microcontroller based DC motor driver. The complete design considered the local radiation data and load profile of the proposed m-SBAR. The Levelized Cost of Energy (LCOE) analysis, Net Present Worth, payback periods, and Benefit-to-Cost Ratio methods have been used to evaluate the financial feasibility and sensitivity analysis of the m-SBAR, grid-powered BAR, and PR. The numerical analysis reveals that the LCOE and Benefit-to-Cost Ratio of the proposed m-SBAR are lower compared to the grid-powered BAR. It has also been found that the microcontroller based DC motor control circuit reduces the battery discharge rate, improves battery life, and controls motor speed efficiently.
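
    The abstract names four appraisal metrics; the short Python sketch below illustrates how they are typically computed from discounted cash flows. All cash-flow figures, the discount rate and the project life are invented for illustration and are not the study's data.

        # Sketch of the financial indicators named in the abstract (LCOE, Net Present
        # Worth, payback period, Benefit-to-Cost Ratio), on assumed cash flows.
        def discounted(values, rate):
            return [v / (1.0 + rate) ** t for t, v in enumerate(values, start=1)]

        capital_cost = 1200.0              # initial investment (e.g. USD)
        annual_cost = [60.0] * 10          # O&M / charging cost per year
        annual_benefit = [350.0] * 10      # fare revenue per year
        annual_energy_kwh = [900.0] * 10   # electricity delivered per year
        rate = 0.08                        # discount rate

        npw = -capital_cost + sum(discounted([b - c for b, c in zip(annual_benefit, annual_cost)], rate))
        bcr = sum(discounted(annual_benefit, rate)) / (capital_cost + sum(discounted(annual_cost, rate)))
        lcoe = (capital_cost + sum(discounted(annual_cost, rate))) / sum(discounted(annual_energy_kwh, rate))

        # simple (undiscounted) payback period
        cumulative, payback = -capital_cost, None
        for year, net in enumerate([b - c for b, c in zip(annual_benefit, annual_cost)], start=1):
            cumulative += net
            if cumulative >= 0 and payback is None:
                payback = year

        print(f"NPW = {npw:.1f}, BCR = {bcr:.2f}, LCOE = {lcoe:.3f} per kWh, payback = {payback} years")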

  6. Data Analyses and Modelling for Risk Based Monitoring of Mycotoxins in Animal Feed

    Science.gov (United States)

    van der Fels-Klerx, H.J. (Ine); Adamse, Paulien; Punt, Ans; van Asselt, Esther D.

    2018-01-01

    Following legislation, European Member States should have multi-annual control programs for contaminants, such as for mycotoxins, in feed and food. These programs need to be risk based implying the checks are regular and proportional to the estimated risk for animal and human health. This study aimed to prioritize feed products in the Netherlands for deoxynivalenol and aflatoxin B1 monitoring. Historical mycotoxin monitoring results from the period 2007–2016 were combined with data from other sources. Based on occurrence, groundnuts had high priority for aflatoxin B1 monitoring; some feed materials (maize and maize products and several oil seed products) and complete/complementary feed excluding dairy cattle and young animals had medium priority; and all other animal feeds and feed materials had low priority. For deoxynivalenol, maize by-products had a high priority, complete and complementary feed for pigs had a medium priority and all other feed and feed materials a low priority. When health consequence estimations were also included, the feed materials that ranked highest for aflatoxin B1 included sunflower seed, palm kernel expeller/extracts and maize. For deoxynivalenol, maize products were ranked highest, followed by various small grain cereals (products); all other feed materials were of lower concern. Results of this study have proven to be useful in setting up the annual risk based control program for mycotoxins in animal feed and feed materials. PMID:29373559

  7. Emission Characteristics of Gas-Fired Boilers based on Category-Specific Emission Factor from Field Measurements in Beijing, China

    Science.gov (United States)

    Itahashi, S.; Yan, X.; Song, G.; Yan, J.; Xue, Y.

    2017-12-01

    Gas-fired boilers will become the main stationary sources of NOx in Beijing. However, the knowledge of gas-fired boilers in Beijing is limited. In the present study, the emission characteristics of NOx, SO2, and CO from gas-fired boilers in Beijing were established using category-specific emission factors (EFs) from field measurements. In order to obtain category-specific EFs, boilers were classified through influence analysis. Factors such as combustion mode, boiler type, and installed capacity were considered critical for establishing EFs because they play significant roles in pollutant formation. The EFs for NOx, CO, and SO2 ranged from 1.42 to 6.86 g m-3, from 0.05 to 0.67 g m-3, and from 0.03 to 0.48 g m-3. The emissions of NOx, SO2, and CO for gas-fired boilers in Beijing were 11121 t, 468 t, and 222 t in 2014, respectively. The emissions were spatially allocated into grid cells with a resolution of 1 km × 1 km, and the results indicated that top emitters were in central Beijing. The uncertainties were quantified using a Monte Carlo simulation. The results indicated high uncertainties in CO (-157% to 154%) and SO2 (-127% to 182%) emissions, and relatively low uncertainties (-34% to 34%) in NOx emissions. Furthermore, approximately 61.2% and 96.8% of the monitored chamber combustion boilers (CCBs) met the standard limits for NOx and SO2, respectively. Concerning NOx, low-NOx burners and NOx emission control measures are urgently needed for implementing stricter standards. Adopting terminal control measures is unnecessary for SO2, although its concentration occasionally exceeds standard limits, because reduction of its concentration can be achieved through control of the sulfur content of natural gas at a stable low level. Furthermore, the atmospheric combustion boilers (ACBs) should be substituted with CCBs, because ACBs have higher emissions despite a lower gross installed capacity. The results of this study will aid in understanding and controlling emissions from gas-fired boilers.
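
    As a rough illustration of the bottom-up, emission-factor approach and the Monte Carlo uncertainty quantification described above, the Python sketch below multiplies an assumed category-averaged emission factor by assumed fuel consumption and propagates assumed uncertainties; none of the numbers are the measured Beijing values.

        # Sketch: bottom-up emission estimate (emission factor x fuel use) with a
        # Monte Carlo uncertainty range. Emission factor, fuel consumption and
        # uncertainty spreads are illustrative assumptions.
        import random
        random.seed(0)

        fuel_use_m3 = 2.0e9          # annual natural-gas consumption of the boiler fleet
        ef_nox_g_per_m3 = 3.5        # category-averaged NOx emission factor
        ef_rel_sd = 0.15             # assumed relative uncertainty of the factor

        def one_draw():
            ef = random.gauss(ef_nox_g_per_m3, ef_rel_sd * ef_nox_g_per_m3)
            activity = random.gauss(fuel_use_m3, 0.05 * fuel_use_m3)
            return ef * activity / 1.0e6   # grams -> tonnes

        draws = sorted(one_draw() for _ in range(10000))
        central = ef_nox_g_per_m3 * fuel_use_m3 / 1.0e6
        low, high = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
        print(f"NOx emission ~ {central:.0f} t, 95% interval {low:.0f}-{high:.0f} t")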

  8. Quantifying nitrous oxide emissions from Chinese grasslands with a process-based model

    Directory of Open Access Journals (Sweden)

    F. Zhang

    2010-06-01

    As one of the largest land cover types, grassland can potentially play an important role in the ecosystem services of natural resources in China. Nitrous oxide (N2O) is a major greenhouse gas emitted from grasslands. The current N2O inventory at a regional or national level in China relies on the emission factor method, which is based on limited measurements. To improve the accuracy of the inventory by capturing the spatial variability of N2O emissions under the diverse climate, soil and management conditions across China, we adopted an approach by utilizing a process-based biogeochemical model, DeNitrification-DeComposition (DNDC), to quantify N2O emissions from Chinese grasslands. In the present study, DNDC was tested against datasets of N2O fluxes measured at eight grassland sites in China with encouraging results. The validated DNDC was then linked to a GIS database holding spatially differentiated information of climate, soil, vegetation and management at county level for all the grasslands in the country. Daily weather data for 2000–2007 from 670 meteorological stations across the entire domain were employed to drive the simulations. The modelled results on a national scale showed a clear geographic pattern of N2O emissions. A high-emission strip showed up stretching from northeast to central China, which is consistent with the eastern boundary between the temperate grassland region and the major agricultural regions of China. The grasslands in the western mountain regions, however, emitted much less N2O. The regionally averaged rates of N2O emissions were 0.26, 0.14 and 0.38 kg nitrogen (N) ha−1 y−1 for the temperate, montane and tropical/subtropical grasslands, respectively. The annual mean N2O emission from the total 337 million ha of grasslands in China was 76.5 ± 12.8 Gg N for the simulated years.

  9. Initial Provincial Allocation and Equity Evaluation of China’s Carbon Emission Rights—Based on the Improved TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2018-03-01

    As the world’s largest carbon emitter, China considers carbon emissions trading to be an important measure in its national strategy for energy conservation and emissions reduction. The initial allocation of China’s carbon emissions rights at the provincial level is a core issue of carbon emissions trading. A scientific and reasonable distinction between the carbon emission rights of provinces is crucial for China to achieve emissions reduction targets. Based on the idea of multi-objective decision-making, this paper uses the improved Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method to allocate China’s initial carbon emission rights to the provinces and uses the Gini coefficient sub-group decomposition method to evaluate the fairness of the allocation results. First, the results of a theoretical distribution show that in the initial allocation of carbon emission rights, the provinces receiving a large proportion are those with large populations and high energy use, such as Shandong Province, Jiangsu Province, Hebei Province and Henan Province; the provinces with a small proportion of the initial allocation of carbon emissions consist of two municipalities, Beijing and Shanghai, as well as Hainan Province, which is dominated by tourism. Overall, the initial allocation of carbon emission rights in the northern and eastern regions constituted the largest proportion, with the south-central region and the northwest region being the second largest and the southwest region being the smallest. Second, the difference between the theoretical allocation and the actual allocation of carbon emission rights in China was clear. The energy consumption of large provinces and provinces dominated by industry generally had a negative difference (the theoretical allocation of carbon emissions was less than the actual value), while Qinghai, dominated by agriculture and animal husbandry, showed a positive balance (the theoretical allocation of carbon emissions was greater than the actual value).
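
    The sketch below (Python, NumPy assumed) shows a plain TOPSIS ranking used to split an assumed national allowance in proportion to each province's closeness coefficient. The criteria, weights and cap are invented, and neither the paper's "improved" TOPSIS nor the Gini sub-group decomposition is reproduced here.

        # Sketch of a standard TOPSIS ranking used to split an emission cap among
        # provinces; criteria values and weights are illustrative assumptions.
        import numpy as np

        provinces = ["A", "B", "C"]
        # columns: population, GDP, historical emissions (all treated as benefit criteria here)
        X = np.array([
            [90.0, 7.0, 800.0],
            [50.0, 9.0, 400.0],
            [20.0, 3.0, 150.0],
        ])
        w = np.array([0.4, 0.3, 0.3])

        R = X / np.sqrt((X ** 2).sum(axis=0))      # vector normalisation
        V = R * w                                  # weighted normalised matrix
        ideal, anti = V.max(axis=0), V.min(axis=0) # ideal and anti-ideal solutions
        d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
        closeness = d_minus / (d_plus + d_minus)

        cap = 1000.0  # total allowance to distribute (illustrative units)
        allocation = cap * closeness / closeness.sum()
        for p, c, a in zip(provinces, closeness, allocation):
            print(f"{p}: closeness {c:.3f}, allocation {a:.1f}")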

  10. GHG emission quantification for pavement construction projects using a process-based approach

    Directory of Open Access Journals (Sweden)

    Charinee Limsawasd

    2017-03-01

    Climate change and greenhouse gas (GHG) emissions have attracted much attention for their impacts upon the global environment. Initiation of new legislation and regulations for control of GHG emissions from the industrial sectors has been applied to address this problem. The transportation industries, which include operation of road pavement and pavement construction equipment, are among the highest GHG-emitting sectors. This study presents a novel quantification model of GHG emissions of pavement construction using process-based analysis. The model is composed of five modules that evaluate GHG emissions. These are: (1) material production and acquisition, (2) material transport to a project site, (3) heavy equipment use, (4) on-site machinery use, and (5) on-site electricity use. The model was applied to a hypothetical pavement project to compare the environmental impacts of flexible and rigid pavement types during construction. The resulting model can be used for evaluation of environmental impacts, as well as for designing and planning highway pavement construction.
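
    A minimal sketch of the five-module accounting described above, in Python: each module's emissions are a quantity multiplied by an emission factor, and the modules are summed. All quantities and factors are illustrative assumptions, not the paper's inventory data.

        # Sketch: summing GHG emissions over the five modules listed in the abstract
        # (materials, material transport, heavy equipment, on-site machinery,
        # on-site electricity). Quantities and emission factors are illustrative.
        materials = [            # (description, quantity, kg CO2e per unit)
            ("asphalt mix (t)", 5000, 60.0),
            ("aggregate (t)", 12000, 5.0),
        ]
        transport_tkm = 250000            # tonne-kilometres of material haulage
        equipment_hours = [("paver", 120, 80.0), ("roller", 150, 50.0)]  # (name, h, kg CO2e/h)
        machinery_diesel_l = 8000         # on-site machinery fuel use
        electricity_kwh = 15000           # on-site electricity use

        EF_TRANSPORT = 0.10               # kg CO2e per tonne-km (assumed)
        EF_DIESEL = 2.7                   # kg CO2e per litre (assumed)
        EF_GRID = 0.6                     # kg CO2e per kWh (assumed)

        emissions = {
            "materials": sum(q * ef for _, q, ef in materials),
            "transport": transport_tkm * EF_TRANSPORT,
            "equipment": sum(h * ef for _, h, ef in equipment_hours),
            "machinery": machinery_diesel_l * EF_DIESEL,
            "electricity": electricity_kwh * EF_GRID,
        }
        total = sum(emissions.values())
        for module, kg in emissions.items():
            print(f"{module:12s}: {kg/1000:8.1f} t CO2e")
        print(f"{'total':12s}: {total/1000:8.1f} t CO2e")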

  11. First-principles calculations of orientation dependence of Si thermal oxidation based on Si emission model

    Science.gov (United States)

    Nagura, Takuya; Kawachi, Shingo; Chokawa, Kenta; Shirakawa, Hiroki; Araidai, Masaaki; Kageshima, Hiroyuki; Endoh, Tetsuo; Shiraishi, Kenji

    2018-04-01

    It is expected that the off-state leakage current of MOSFETs can be reduced by employing vertical body channel MOSFETs (V-MOSFETs). However, in fabricating these devices, the structure of the Si pillars sometimes cannot be maintained during oxidation, since Si atoms sometimes disappear from the Si/oxide interface (Si missing). Thus, in this study, we used first-principles calculations based on the density functional theory, and investigated the Si emission behavior at the various interfaces on the basis of the Si emission model including its atomistic structure and dependence on Si crystal orientation. The results show that the order in which Si atoms are more likely to be emitted during thermal oxidation is (111) > (110) > (310) > (100). Moreover, the emission of Si atoms is enhanced as the compressive strain increases. Therefore, the emission of Si atoms occurs more easily in V-MOSFETs than in planar MOSFETs. To reduce Si missing in V-MOSFETs, oxidation processes that induce less strain, such as wet or pyrogenic oxidation, are necessary.

  12. Carbon emissions, logistics volume and GDP in China: empirical analysis based on panel data model.

    Science.gov (United States)

    Guo, Xiaopeng; Ren, Dongfang; Shi, Jiaxing

    2016-12-01

    This paper studies the relationship among carbon emissions, GDP, and logistics by using a panel data model and a combination of statistics and econometrics theory. The model is based on the historical data of 10 typical provinces and cities in China during 2005-2014. The model in this paper adds a logistics variable to the basis of previous studies; this variable is represented by the freight turnover of the provinces. Carbon emissions are calculated by using the annual consumption of coal, oil, and natural gas. GDP is the gross domestic product. The results showed that logistics volume and GDP both contribute to carbon emissions and that the long-term relationships differ between cities in China, mainly influenced by differences in development mode, economic structure, and level of logistics development. After testing the panel model specification, a variable-coefficient panel model was established. The influence of GDP and logistics on carbon emissions is then obtained from the estimated coefficients. The paper concludes with main findings and provides recommendations toward rational planning of urban sustainable development and environmental protection for China.
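
    As an illustration of the variable-coefficient idea, the Python sketch below (NumPy assumed) fits the emissions-GDP-freight relationship separately for each province on synthetic data, so the coefficients are free to differ across provinces; it is not the paper's econometric specification, and the numbers are invented.

        # Sketch of a variable-coefficient panel idea: fit the emission/GDP/freight
        # relationship separately for each province so the coefficients may differ.
        import numpy as np

        rng = np.random.default_rng(1)
        provinces = {}
        for name, (b_gdp, b_freight) in {"Beijing": (0.4, 0.2), "Hebei": (0.8, 0.5)}.items():
            gdp = np.linspace(1.0, 3.0, 10)                  # GDP proxy, 2005-2014
            freight = np.linspace(0.5, 2.0, 10)              # freight turnover proxy
            emissions = 1.0 + b_gdp * gdp + b_freight * freight + rng.normal(0, 0.05, 10)
            provinces[name] = (gdp, freight, emissions)

        for name, (gdp, freight, emissions) in provinces.items():
            X = np.column_stack([np.ones_like(gdp), gdp, freight])
            coef, *_ = np.linalg.lstsq(X, emissions, rcond=None)
            print(f"{name}: intercept {coef[0]:.2f}, GDP effect {coef[1]:.2f}, "
                  f"freight effect {coef[2]:.2f}")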

  13. Feasibility Analysis of Sustainability-Based Measures to Reduce VOC Emissions in Office Partition Manufacturing

    Directory of Open Access Journals (Sweden)

    Marc A. Rosen

    2010-02-01

    A feasibility analysis of reduction opportunities for volatile organic compound (VOC) emissions in manufacturing office furniture partitions is reported, aimed at contributing to efforts to improve the sustainability of the process. A pollution prevention methodology is utilized. The purpose is to provide practical options for VOC emissions reductions during the manufacturing of office furniture partitions, but the concepts can be generally applied to the wood furniture industry. Baseline VOC emissions for a typical plant are estimated using a mass balance approach. The feasibility analysis expands on a preliminary screening to identify viable pollution prevention options using realistic criteria and weightings, and is based on technical, environmental and economic considerations. The measures deemed feasible include the implementation of several best management practices, ceasing the painting of non-visible parts, switching to hot melt backwrapping glue, application of solvent recycling and modification of the mechanical clip attachment. Implementation, measurement and control plans are discussed for the measures considered feasible, which can enhance the sustainability of the manufacturing of office furniture partitions. Reducing VOC emissions using the measures identified can, in conjunction with other measures, improve the sustainability of the manufacturing process.

  14. Properties and Applications of High Emissivity Composite Films Based on Far-Infrared Ceramic Powder.

    Science.gov (United States)

    Xiong, Yabo; Huang, Shaoyun; Wang, Wenqi; Liu, Xinghai; Li, Houbin

    2017-11-29

    Polymer matrix composite materials that can emit radiation in the far-infrared region of the spectrum are receiving increasing attention due to their ability to significantly influence biological processes. This study reports on the far-infrared emissivity property of composite films based on far-infrared ceramic powder. X-ray fluorescence spectrometry, Fourier transform infrared spectroscopy, thermogravimetric analysis, and X-ray powder diffractometry were used to evaluate the physical properties of the ceramic powder. The ceramic powder was found to be rich in aluminum oxide, titanium oxide, and silicon oxide, which demonstrate high far-infrared emissivity. In addition, the micromorphology, mechanical performance, dynamic mechanical properties, and far-infrared emissivity of the composite were analyzed to evaluate their suitability for strawberry storage. The mechanical properties of the far-infrared radiation ceramic (cFIR) composite films were not significantly influenced (p ≥ 0.05) by the addition of the ceramic powder. However, the dynamic mechanical analysis (DMA) properties of the cFIR composite films, including a reduction in damping and shock absorption performance, were significantly influenced by the addition of the ceramic powder. Moreover, the cFIR composite films showed high far-infrared emissivity, which has the capability of prolonging the storage life of strawberries. This research demonstrates that cFIR composite films are promising for future applications.

  15. Estimation model for evaporative emissions from gasoline vehicles based on thermodynamics.

    Science.gov (United States)

    Hata, Hiroo; Yamada, Hiroyuki; Kokuryo, Kazuo; Okada, Megumi; Funakubo, Chikage; Tonokura, Kenichi

    2018-03-15

    In this study, we conducted seven-day diurnal breathing loss (DBL) tests on gasoline vehicles. We propose a model based on the theory of thermodynamics that can represent the experimental results of the current and previous studies. The experiments were performed using 14 physical parameters to determine the dependence of total emissions on temperature, fuel tank fill, and fuel vapor pressure. In most cases, total emissions after an apparent breakthrough were proportional to the difference between minimum and maximum environmental temperatures during the day, fuel tank empty space, and fuel vapor pressure. Volatile organic compounds (VOCs) were measured using a Gas Chromatography Mass Spectrometer and Flame Ionization Detector (GC-MS/FID) to determine the Ozone Formation Potential (OFP) of after-breakthrough gas emitted to the atmosphere. Using the experimental results, we constructed a thermodynamic model for estimating the amount of evaporative emissions after a fully saturated canister breakthrough occurred, and a comparison between the thermodynamic model and previous models was made. Finally, the total annual evaporative emissions and OFP in Japan were determined and compared by each model. Copyright © 2017 Elsevier B.V. All rights reserved.
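
    The sketch below illustrates, under stated assumptions, the kind of thermodynamic reasoning the abstract refers to: a vented fuel tank whose ullage stays saturated with fuel vapor expels gas as the temperature rises, so the emitted fuel mass grows with the temperature swing, the empty tank volume and the vapor pressure. The vapor-pressure curve and all numbers are invented; this is not the paper's fitted model.

        # Sketch of diurnal breathing losses after canister breakthrough: the ullage
        # is assumed saturated at tank temperature and vented at constant pressure.
        import math

        P_ATM = 101325.0      # Pa
        R = 8.314             # J/(mol K)
        M_FUEL = 0.065        # kg/mol, rough average for gasoline vapor (assumed)
        V_ULLAGE = 0.030      # m^3 of empty tank space (assumed)

        def p_vapor(t_kelvin):
            """Very rough gasoline vapor-pressure curve (Pa); illustrative only."""
            return 44000.0 * math.exp(0.045 * (t_kelvin - 293.15))

        def diurnal_loss(t_min_c, t_max_c, steps=1000):
            """Fuel mass (g) vented while the tank warms from t_min_c to t_max_c."""
            vented_fuel_mol = 0.0
            for i in range(steps):
                t1 = 273.15 + t_min_c + (t_max_c - t_min_c) * i / steps
                t2 = 273.15 + t_min_c + (t_max_c - t_min_c) * (i + 1) / steps
                n_air1 = (P_ATM - p_vapor(t1)) * V_ULLAGE / (R * t1)
                n_air2 = (P_ATM - p_vapor(t2)) * V_ULLAGE / (R * t2)
                vented_air = max(n_air1 - n_air2, 0.0)      # air leaves only by venting
                y = p_vapor(t1) / (P_ATM - p_vapor(t1))     # vented fuel per vented air
                vented_fuel_mol += y * vented_air
            return vented_fuel_mol * M_FUEL * 1000.0

        print(f"estimated diurnal loss: {diurnal_loss(20.0, 35.0):.1f} g/day")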

  16. The emission function of ground-based light sources: State of the art and research challenges

    Science.gov (United States)

    Solano Lamphar, Héctor Antonio

    2018-05-01

    To understand the night sky radiance generated by the light emissions of urbanised areas, different researchers are currently proposing various theoretical approaches. The distribution of the radiant intensity as a function of the zenith angle is one of the least well characterized properties in modelling skyglow. This is due to the collective effects of the artificial radiation emitted from the ground-based light sources. The emission function is a key property in characterising the sky brightness under arbitrary conditions; it is therefore required by modellers, environmental engineers, urban planners, light pollution researchers, and experimentalists who study the diffuse light of the night sky. As a matter of course, the emission function considers the public lighting system, which is in fact the main generator of the skyglow. Still, other classes of light-emitting devices are gaining importance owing to their overuse and the urban sprawl of recent years. This paper will address the importance of the emission function in modelling skyglow and the factors involved in its characterization. On this subject, the author's intention is to organise, integrate, and evaluate previously published research in order to state the progress of current research toward clarifying this topic.

  17. An experimental assessment on the influence of high octane fuels on biofuel based dual fuel engine performance, emission, and combustion

    Directory of Open Access Journals (Sweden)

    Masimalai Senthilkumar

    2017-01-01

    This paper presents an experimental study on the effect of different high octane fuels (such as eucalyptus oil, ethanol, and methanol) on the performance behaviour of a biofuel based dual fuel engine. A single cylinder Diesel engine was modified and tested under dual fuel mode of operation. Initially the engine was run using neat diesel and neat mahua oil as fuels. In the second phase, the engine was operated in dual fuel mode by using a specially designed variable jet carburettor to supply the high octane fuels. Engine trials were made at 100% and 40% loads (power outputs) with varying amounts of high octane fuels up to the maximum possible limit. The performance and emission characteristics of the engine were obtained and analysed. Results indicated significant improvement in brake thermal efficiency and simultaneous reduction in smoke and NO emissions in dual fuel operation with all the inducted fuels. At 100% load the brake thermal efficiency increased from 25.6% to a maximum of 32.3, 30.5, and 28.4%, respectively, with eucalyptus oil, ethanol, and methanol as primary fuels. Smoke was reduced drastically from 78% with neat mahua oil to a minimum of 41, 48, and 53%, respectively, with eucalyptus oil, ethanol, and methanol at the maximum efficiency point. The optimal energy share for the best engine behaviour was found to be 44.6, 27.3, and 23.2%, respectively, for eucalyptus oil, ethanol, and methanol at 100% load. Among the primary fuels tested, eucalyptus oil showed the maximum brake thermal efficiency, minimum smoke and NO emissions and maximum energy replacement for the optimal operation of the engine.

  18. Capturing PM2.5 emissions from 3D printing via nanofiber-based air filter

    OpenAIRE

    Rao, Chengchen; Gu, Fu; Zhao, Peng; Sharmin, Nusrat; Gu, Haibing; Fu, Jianzhong

    2017-01-01

    This study investigated the feasibility of using polycaprolactone (PCL) nanofiber-based air filters to capture PM2.5 particles emitted from fused deposition modeling (FDM) 3D printing. Generation and aggregation of emitted particles were investigated under different testing environments. The results show that: (1) the PCL nanofiber membranes are capable of capturing particle emissions from 3D printing, (2) relative humidity plays a significant role in aggregation of the captured particles, ...

  19. Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines

    Science.gov (United States)

    2017-05-01

    FINAL REPORT: Development of Demonstrably Predictive Models for Emissions from Alternative Fuels Based Aircraft Engines, SERDP Project WP-2151.

  20. Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen

    2012-01-01

    In this paper, wind power generators are being incorporated in the multiobjective economic emission dispatch problem which minimizes wind-thermal electrical energy cost and emissions produced by fossil-fueled power plants, simultaneously. Large integration of wind energy sources necessitates an efficient model to cope with uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on 2m point estimated method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. 2m point estimated method handles the system uncertainties and renders the probability density function of desired variables efficiently. Moreover, a new population-based optimization algorithm called modified teaching-learning algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions are kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are represented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
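
    The following Python sketch shows the zero-skewness form of a 2m point estimate method of the kind the abstract relies on: each uncertain input is evaluated at its mean plus or minus sqrt(m) standard deviations with the others held at their means, and the 2m model runs are weight-averaged to approximate the output's mean and standard deviation. The toy dispatch-cost function and the input statistics are invented for illustration and are not the paper's formulation.

        # Sketch: zero-skewness 2m point estimate method for propagating input
        # uncertainty through a model with m uncertain inputs.
        import math

        def pem_2m(model, means, sigmas):
            m = len(means)
            xi = math.sqrt(m)
            mean_y, mean_y2 = 0.0, 0.0
            for k in range(m):
                for sign in (+1.0, -1.0):
                    x = list(means)
                    x[k] = means[k] + sign * xi * sigmas[k]
                    y = model(x)
                    w = 1.0 / (2.0 * m)        # equal weights in the symmetric case
                    mean_y += w * y
                    mean_y2 += w * y * y
            var_y = mean_y2 - mean_y ** 2
            return mean_y, math.sqrt(max(var_y, 0.0))

        # toy model: system cost falls with available wind power, rises with load
        def dispatch_cost(x):
            wind_mw, load_mw = x
            thermal = max(load_mw - wind_mw, 0.0)
            return 30.0 * thermal + 5.0 * wind_mw

        mu, sd = pem_2m(dispatch_cost, means=[120.0, 900.0], sigmas=[40.0, 60.0])
        print(f"expected cost ~ {mu:.0f}, std ~ {sd:.0f}")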

  1. Assessing an organizational culture instrument based on the Competing Values Framework: Exploratory and confirmatory factor analyses

    Science.gov (United States)

    Helfrich, Christian D; Li, Yu-Fang; Mohr, David C; Meterko, Mark; Sales, Anne E

    2007-01-01

    Background The Competing Values Framework (CVF) has been widely used in health services research to assess organizational culture as a predictor of quality improvement implementation, employee and patient satisfaction, and team functioning, among other outcomes. CVF instruments generally are presented as well-validated with reliable aggregated subscales. However, only one study in the health sector has been conducted for the express purpose of validation, and that study population was limited to hospital managers from a single geographic locale. Methods We used exploratory and confirmatory factor analyses to examine the underlying structure of data from a CVF instrument. We analyzed cross-sectional data from a work environment survey conducted in the Veterans Health Administration (VHA). The study population comprised all staff in non-supervisory positions. The survey included 14 items adapted from a popular CVF instrument, which measures organizational culture according to four subscales: hierarchical, entrepreneurial, team, and rational. Results Data from 71,776 non-supervisory employees (approximate response rate 51%) from 168 VHA facilities were used in this analysis. Internal consistency of the subscales was moderate to strong (α = 0.68 to 0.85). However, the entrepreneurial, team, and rational subscales had higher correlations across subscales than within, indicating poor divergent properties. Exploratory factor analysis revealed two factors, comprising the ten items from the entrepreneurial, team, and rational subscales loading on the first factor, and two items from the hierarchical subscale loading on the second factor, along with one item from the rational subscale that cross-loaded on both factors. Results from confirmatory factor analysis suggested that the two-subscale solution provides a more parsimonious fit to the data as compared to the original four-subscale model. Conclusion This study suggests that there may be problems applying conventional
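
    As a small illustration of the internal-consistency statistic quoted above, the Python sketch below (NumPy assumed) computes Cronbach's alpha for one hypothetical subscale; the item scores are invented and do not come from the VHA survey.

        # Sketch: Cronbach's alpha for a subscale, alpha = k/(k-1) * (1 - sum(item var)/var(total)).
        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items matrix of Likert-type scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1.0) * (1.0 - item_vars / total_var)

        team_subscale = [
            [4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 3], [3, 4, 4],
        ]
        print(f"Cronbach's alpha = {cronbach_alpha(team_subscale):.2f}")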

  2. Analyses of Crime Patterns in NIBRS Data Based on a Novel Graph Theory Clustering Method: Virginia as a Case Study

    Directory of Open Access Journals (Sweden)

    Peixin Zhao

    2014-01-01

    This paper suggests a novel clustering method for analyzing the National Incident-Based Reporting System (NIBRS) data, which includes the determination of correlation of different crime types, the development of a likelihood index for crimes to occur in a jurisdiction, and the clustering of jurisdictions based on crime type. The method was tested by using the 2005 assault data from 121 jurisdictions in Virginia as a test case. The analyses of these data show that some different crime types are correlated and some different crime parameters are correlated with different crime types. The analyses also show that certain jurisdictions within Virginia share certain crime patterns. This information assists with constructing a pattern for a specific crime type and can be used to determine whether a jurisdiction may be more likely to see this type of crime occur in its area.
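
    The Python sketch below (NumPy assumed) illustrates two of the ingredients mentioned above on invented jurisdiction-level counts: a correlation matrix across crime types and a simple per-capita likelihood index. The paper's graph-theoretic clustering itself is not reproduced, and the data are made up.

        # Sketch: crime-type correlations and a per-capita likelihood index on
        # hypothetical jurisdiction-level counts.
        import numpy as np

        jurisdictions = ["J1", "J2", "J3", "J4", "J5"]
        population = np.array([30000, 120000, 55000, 80000, 20000])
        counts = np.array([          # rows: jurisdictions; columns: assault, burglary, theft
            [45, 60, 150],
            [300, 210, 800],
            [90, 75, 260],
            [160, 140, 500],
            [20, 35, 90],
        ])

        print("crime-type correlation matrix:")
        print(np.corrcoef(counts, rowvar=False).round(2))

        rate = counts[:, 0] / population * 1000.0           # assaults per 1,000 residents
        likelihood_index = rate / rate.mean()               # relative to the overall average
        for j, idx in zip(jurisdictions, likelihood_index):
            print(f"{j}: assault likelihood index {idx:.2f}")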

  3. Emission inventory; Inventaire des emissions

    Energy Technology Data Exchange (ETDEWEB)

    Fontelle, J.P. [CITEPA, Centre Interprofessionnel Technique d`Etudes de la Pollution Atmospherique, 75 - Paris (France)

    1997-12-31

    Statistics on air pollutant (sulfur dioxide, nitrogen oxides and ammonia) emissions, acid equivalent emissions and their evolution since 1990 in the various countries of Europe and the USA are presented. Emission data from the industrial, agricultural, transportation and power sectors are given, and comparisons are carried out between countries based on GNP and population, pollution import/export fluxes and compliance with the previous emission reduction objectives.

  4. Eye gaze in intelligent user interfaces gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies opened the way to design novel attention-based intelligent user interfaces, and highlighted the importance of better understanding of eye-gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation.Eye Gaze in Intelligent User Interfac

  5. Seismic fragility analyses of nuclear power plant structures based on the recorded earthquake data in Korea

    International Nuclear Information System (INIS)

    Joe, Yang Hee; Cho, Sung Gook

    2003-01-01

    This paper briefly introduces an improved method for evaluating seismic fragilities of components of nuclear power plants in Korea. Engineering characteristics of small-magnitude earthquake spectra recorded in the Korean peninsula during the last several years are also discussed in this paper. For the purpose of evaluating the effects of the recorded earthquakes on the seismic fragilities of Korean nuclear power plant structures, several cases of comparative studies have been performed. The study results show that seismic fragility analysis based on Newmark's spectra might over-estimate the seismic capacities of Korean facilities. (author)

  6. Theoretical and Empirical Analyses of an Improved Harmony Search Algorithm Based on Differential Mutation Operator

    Directory of Open Access Journals (Sweden)

    Longquan Yong

    2012-01-01

    Harmony search (HS) method is an emerging metaheuristic optimization algorithm. In this paper, an improved harmony search method based on a differential mutation operator (IHSDE) is proposed to deal with optimization problems. Since population diversity plays an important role in the behavior of evolutionary algorithms, the aim of this paper is to calculate the expected population mean and variance of IHSDE from a theoretical viewpoint. Numerical results, compared with those of HSDE and NGHS, show that the IHSDE method has good convergence properties over a test suite of well-known benchmark functions.
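
    A compact Python sketch of the idea named above: a harmony search loop in which the pitch-adjustment step is replaced by a differential-mutation move of the form x_r1 + F*(x_r2 - x_r3). The parameter settings and the sphere test function are illustrative choices rather than the paper's configuration.

        # Sketch: harmony search with a differential-mutation move in place of
        # pitch adjustment, minimising the sphere function.
        import random
        random.seed(0)

        def sphere(x):
            return sum(v * v for v in x)

        DIM, HMS, HMCR, PAR, F = 5, 10, 0.9, 0.3, 0.5
        LOW, HIGH, ITERS = -10.0, 10.0, 2000

        memory = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(HMS)]
        for _ in range(ITERS):
            new = []
            for d in range(DIM):
                if random.random() < HMCR:
                    value = random.choice(memory)[d]
                    if random.random() < PAR:                  # differential mutation
                        r1, r2, r3 = random.sample(memory, 3)
                        value = r1[d] + F * (r2[d] - r3[d])
                else:
                    value = random.uniform(LOW, HIGH)
                new.append(min(max(value, LOW), HIGH))
            worst = max(range(HMS), key=lambda i: sphere(memory[i]))
            if sphere(new) < sphere(memory[worst]):
                memory[worst] = new

        best = min(memory, key=sphere)
        print(f"best value after {ITERS} improvisations: {sphere(best):.6f}")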

  7. A ground-based near-infrared emission spectrum of the exoplanet HD 189733b.

    Science.gov (United States)

    Swain, Mark R; Deroo, Pieter; Griffith, Caitlin A; Tinetti, Giovanna; Thatte, Azam; Vasisht, Gautam; Chen, Pin; Bouwman, Jeroen; Crossfield, Ian J; Angerhausen, Daniel; Afonso, Cristina; Henning, Thomas

    2010-02-04

    Detection of molecules using infrared spectroscopy probes the conditions and compositions of exoplanet atmospheres. Water (H2O), methane (CH4), carbon dioxide (CO2), and carbon monoxide (CO) have been detected in two hot Jupiters. These previous results relied on space-based telescopes that do not provide spectroscopic capability in the 2.4-5.2 μm spectral region. Here we report ground-based observations of the dayside emission spectrum for HD 189733b between 2.0-2.4 μm and 3.1-4.1 μm, where we find a bright emission feature. Where overlap with space-based instruments exists, our results are in excellent agreement with previous measurements. A feature at approximately 3.25 μm is unexpected and difficult to explain with models that assume local thermodynamic equilibrium (LTE) conditions at the 1 bar to 1 x 10^-6 bar pressures typically sampled by infrared measurements. The most likely explanation for this feature is that it arises from non-LTE emission from CH4, similar to what is seen in the atmospheres of planets in our own Solar System. These results suggest that non-LTE effects may need to be considered when interpreting measurements of strongly irradiated exoplanets.

  8. Profiting from negawatts: Reducing absolute consumption and emissions through a performance-based energy economy

    International Nuclear Information System (INIS)

    Steinberger, Julia K.; Niel, Johan van; Bourg, Dominique

    2009-01-01

    Current energy and GHG emissions policies either focus directly on emissions or promote renewable production and the implementation of specific efficiency measures. Meanwhile, the fundamental structure of the energy market based on profits through energy throughput remains largely unchallenged. This policy oversight prevents the transition to an energy economy in which profits are based on energy services delivered at the lowest energy cost: a performance-based energy economy (PBEE). The PBEE applies the combined concepts of the performance economy and energy services to the energy sector. Energy Service Companies (ESCOs) are discussed as an example of PBEE practices. The implications for energy suppliers and consumers as well as the conditions for PBEE diffusion and consequences for technological change are also explored. The expected environmental, social and economic benefits are described. However, absolute consumption and emissions reductions may prove elusive due to the rebound effect. In order to forestall rebound-led increases, complementary policy measures likely to lead to absolute reductions are required

  9. Profiting from negawatts. Reducing absolute consumption and emissions through a performance-based energy economy

    Energy Technology Data Exchange (ETDEWEB)

    Steinberger, Julia K. [IPTEH, Quartier Sorge-Bat. Amphipole, University of Lausanne, CH-1015-Lausanne (Switzerland); Institute of Social Ecology, Klagenfurt University, Schottenfeldg. 29, A-1070 Vienna (Austria); van Niel, Johan [IPTEH, Quartier Sorge-Bat. Amphipole, University of Lausanne, CH-1015-Lausanne (Switzerland); CREIDD, University of Technology of Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France); Bourg, Dominique [IPTEH, Quartier Sorge-Bat. Amphipole, University of Lausanne, CH-1015-Lausanne (Switzerland)

    2009-01-15

    Current energy and GHG emissions policies either focus directly on emissions or promote renewable production and the implementation of specific efficiency measures. Meanwhile, the fundamental structure of the energy market based on profits through energy throughput remains largely unchallenged. This policy oversight prevents the transition to an energy economy in which profits are based on energy services delivered at the lowest energy cost: a performance-based energy economy (PBEE). The PBEE applies the combined concepts of the performance economy and energy services to the energy sector. Energy Service Companies (ESCOs) are discussed as an example of PBEE practices. The implications for energy suppliers and consumers as well as the conditions for PBEE diffusion and consequences for technological change are also explored. The expected environmental, social and economic benefits are described. However, absolute consumption and emissions reductions may prove elusive due to the rebound effect. In order to forestall rebound-led increases, complementary policy measures likely to lead to absolute reductions are required. (author)

  10. Profiting from negawatts: Reducing absolute consumption and emissions through a performance-based energy economy

    Energy Technology Data Exchange (ETDEWEB)

    Steinberger, Julia K. [IPTEH, Quartier Sorge-Bat. Amphipole, University of Lausanne, CH-1015-Lausanne (Switzerland); Institute of Social Ecology, Klagenfurt University, Schottenfeldg. 29, A-1070 Vienna (Austria)], E-mail: julia.steinberger@uni-klu.ac.at; Niel, Johan van [IPTEH, Quartier Sorge-Bat. Amphipole, University of Lausanne, CH-1015-Lausanne (Switzerland); CREIDD, University of Technology of Troyes, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France); Bourg, Dominique [IPTEH, Quartier Sorge-Bat. Amphipole, University of Lausanne, CH-1015-Lausanne (Switzerland)

    2009-01-15

    Current energy and GHG emissions policies either focus directly on emissions or promote renewable production and the implementation of specific efficiency measures. Meanwhile, the fundamental structure of the energy market based on profits through energy throughput remains largely unchallenged. This policy oversight prevents the transition to an energy economy in which profits are based on energy services delivered at the lowest energy cost: a performance-based energy economy (PBEE). The PBEE applies the combined concepts of the performance economy and energy services to the energy sector. Energy Service Companies (ESCOs) are discussed as an example of PBEE practices. The implications for energy suppliers and consumers as well as the conditions for PBEE diffusion and consequences for technological change are also explored. The expected environmental, social and economic benefits are described. However, absolute consumption and emissions reductions may prove elusive due to the rebound effect. In order to forestall rebound-led increases, complementary policy measures likely to lead to absolute reductions are required.

  11. The Application of Moessbauer Emission Spectroscopy to Industrial Cobalt Based Fischer-Tropsch Catalysts

    International Nuclear Information System (INIS)

    Loosdrecht, J. van de; Berge, P. J. van; Craje, M. W. J.; Kraan, A. M. van der

    2002-01-01

    The application of Moessbauer emission spectroscopy to study cobalt based Fischer-Tropsch catalysts for the gas-to-liquids process was investigated. It was shown that Moessbauer emission spectroscopy could be used to study the oxidation of cobalt as a deactivation mechanism of high loading cobalt based Fischer-Tropsch catalysts. Oxidation was observed under conditions that are in contradiction with the bulk cobalt phase thermodynamics. This can be explained by oxidation of small cobalt crystallites or by surface oxidation. The formation of re-reducible Co3+ species was observed as well as the formation of irreducible Co3+ and Co2+ species that interact strongly with the alumina support. The formation of the different cobalt species depends on the oxidation conditions. Iron was used as a probe nuclide to investigate the cobalt catalyst preparation procedure. A high-pressure Moessbauer emission spectroscopy cell was designed and constructed, which creates the opportunity to study cobalt based Fischer-Tropsch catalysts under realistic synthesis conditions.

  12. Environmental consequence analyses of fish farm emissions related to different scales and exemplified by data from the Baltic--a review.

    Science.gov (United States)

    Gyllenhammar, Andreas; Håkanson, Lars

    2005-08-01

    The aim of this work is to review studies to evaluate how emissions from fish cage farms cause eutrophication effects in marine environments. The focus is on four different scales: (i) the conditions at the site of the farm, (ii) the local scale related to the coastal area where the farm is situated, (iii) the regional scale encompassing many coastal areas and (iv) the international scale including several regional coastal areas. The aim is to evaluate the role of nutrient emissions from fish farms in a general way, but all selected examples come from the Baltic Sea. An important part of this evaluation concerns the method to define the boundaries of a given coastal area. If this is done arbitrarily, one would obtain arbitrary results in the environmental consequence analysis. In this work, the boundary lines between the coast and the sea are drawn using GIS methods (geographical information systems) according to the topographical bottleneck method, which opens a way to determine many fundamental characteristics in the context of mass balance calculations. In mass balance modelling, the fluxes from the fish farm should be compared to other fluxes to, within and from coastal areas. Results collected in this study show that: (1) at the smallest scale, the impact area of a fish cage farm often corresponds to the size of a "football field" (50-100 m) if the annual fish production is about 50 ton; (2) at the local scale (1 ha to 100 km2), there exists a simple load diagram (effect-load-sensitivity) to relate the environmental response and effects from a specific load from a fish cage farm. This makes it possible to obtain a first estimate of the maximum allowable fish production in a specific coastal area; (3) at the regional scale (100-10,000 km2), it is possible to create negative nutrient fluxes, i.e., use fish farming as a method to reduce the nutrient loading to the sea. The breaking point is to use more than about 1.1 g wet weight regionally caught wild fish per gram

  13. Augmentation of French grunt diet description using combined visual and DNA-based analyses

    Science.gov (United States)

    Hargrove, John S.; Parkyn, Daryl C.; Murie, Debra J.; Demopoulos, Amanda W.J.; Austin, James D.

    2012-01-01

    Trophic linkages within a coral-reef ecosystem may be difficult to discern in fish species that reside on, but do not forage on, coral reefs. Furthermore, dietary analysis of fish can be difficult in situations where prey is thoroughly macerated, resulting in many visually unrecognisable food items. The present study examined whether the inclusion of a DNA-based method could improve the identification of prey consumed by French grunt, Haemulon flavolineatum, a reef fish that possesses pharyngeal teeth and forages on soft-bodied prey items. Visual analysis indicated that crustaceans were most abundant numerically (38.9%), followed by sipunculans (31.0%) and polychaete worms (5.2%), with a substantial number of unidentified prey (12.7%). For the subset of prey with both visual and molecular data, there was a marked reduction in the number of unidentified sipunculans (visual – 31.1%, combined – 4.4%), unidentified crustaceans (visual – 15.6%, combined – 6.7%), and unidentified taxa (visual – 11.1%, combined – 0.0%). Utilising results from both methodologies resulted in an increased number of prey placed at the family level (visual – 6, combined – 33) and species level (visual – 0, combined – 4). Although more costly than visual analysis alone, our study demonstrated the feasibility of DNA-based identification of visually unidentifiable prey in the stomach contents of fish.

  14. VALUE-BASED MEDICINE AND OPHTHALMOLOGY: AN APPRAISAL OF COST-UTILITY ANALYSES

    Science.gov (United States)

    Brown, Gary C; Brown, Melissa M; Sharma, Sanjay; Brown, Heidi; Smithen, Lindsay; Leeser, David B; Beauchamp, George

    2004-01-01

    Purpose: To ascertain the extent to which ophthalmologic interventions have been evaluated in value-based medicine format. Methods: Retrospective literature review. Papers in the healthcare literature utilizing cost-utility analysis were reviewed by researchers at the Center for Value-Based Medicine, Flourtown, Pennsylvania. A literature review of papers addressing the cost-utility analysis of ophthalmologic procedures in the United States over a 12-year period from 1992 to 2003 was undertaken using the National Library of Medicine and EMBASE databases. The cost-utility of ophthalmologic interventions in inflation-adjusted (real) year 2003 US dollars expended per quality-adjusted life-year ($/QALY) was ascertained in all instances. Results: A total of 19 papers were found, including a total of 25 interventions. The median cost-utility of ophthalmologic interventions was $5,219/QALY, with a range from $746/QALY to $6.5 million/QALY. Conclusions: The majority of ophthalmologic interventions are especially cost-effective by conventional standards. This is because of the substantial value that ophthalmologic interventions confer to patients with eye diseases for the resources expended. PMID:15747756
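
    For readers unfamiliar with the metric, the short Python sketch below shows the cost-utility arithmetic behind a $/QALY figure, using an invented intervention cost, utility gain, duration and discount rate; the numbers are not drawn from the reviewed studies.

        # Sketch: cost per quality-adjusted life-year (QALY) for a hypothetical
        # intervention that raises utility by a constant amount for a fixed period.
        def cost_per_qaly(cost, utility_gain, years, discount=0.03):
            """Discounted QALYs gained from a constant annual utility improvement."""
            qalys = sum(utility_gain / (1.0 + discount) ** t for t in range(1, years + 1))
            return cost / qalys

        # e.g. an intervention costing $3,000 that raises utility by 0.05 for 15 years
        print(f"${cost_per_qaly(3000.0, 0.05, 15):,.0f} per QALY gained")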

  15. Group analyses of connectivity-based cortical parcellation using repeated k-means clustering.

    Science.gov (United States)

    Nanetti, Luca; Cerliani, Leonardo; Gazzola, Valeria; Renken, Remco; Keysers, Christian

    2009-10-01

    K-means clustering has become a popular tool for connectivity-based cortical segmentation using Diffusion Weighted Imaging (DWI) data. A sometimes ignored issue is, however, that the output of the algorithm depends on the initial placement of starting points, and that different sets of starting points therefore could lead to different solutions. In this study we explore this issue. We apply k-means clustering a thousand times to the same DWI dataset collected in 10 individuals to segment two brain regions: the SMA-preSMA on the medial wall, and the insula. At the level of single subjects, we found that in both brain regions, repeatedly applying k-means indeed often leads to a variety of rather different cortical parcellations. By assessing the similarity and frequency of these different solutions, we show that approximately 256 k-means repetitions are needed to accurately estimate the distribution of possible solutions. Using nonparametric group statistics, we then propose a method to employ the variability of clustering solutions to assess the reliability with which certain voxels can be attributed to a particular cluster. In addition, we show that the proportion of voxels that can be attributed significantly to either cluster in the SMA and preSMA is relatively higher than in the insula and discuss how this difference may relate to differences in the anatomy of these regions.
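
    The Python sketch below (scikit-learn and NumPy assumed available) mimics the repeated-clustering procedure on toy two-dimensional data: k-means is run many times from different starting points and per-point assignment consistency is summarized. The data and the simple label-alignment step are illustrative stand-ins for DWI connectivity profiles, not the paper's pipeline.

        # Sketch: repeated k-means from different random starts, with per-point
        # consistency of the resulting cluster assignments.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])

        n_repeats, labelings = 256, []
        for seed in range(n_repeats):
            km = KMeans(n_clusters=2, n_init=1, random_state=seed).fit(data)
            labels = km.labels_
            # align label names to the first run so repetitions are comparable
            if labelings and np.mean(labels == labelings[0]) < 0.5:
                labels = 1 - labels
            labelings.append(labels)

        labelings = np.array(labelings)
        consistency = np.mean(labelings == labelings[0], axis=0)  # per-point agreement
        print(f"points assigned consistently in >95% of runs: {(consistency > 0.95).mean():.0%}")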

  16. A MULTI-AGENT BASED SOCIAL CRM FRAMEWORK FOR EXTRACTING AND ANALYSING OPINIONS

    Directory of Open Access Journals (Sweden)

    ABDELAZIZ EL FAZZIKI

    2017-08-01

    Social media provide a wide space for people from around the world to communicate, share knowledge and personal experiences. They are increasingly becoming an important data source for opinion mining and sentiment analysis, thanks to shared comments and reviews about products and services. Companies are showing a growing interest in harnessing their potential in order to support setting up marketing strategies. Despite the importance of sentiment analysis in decision making, there is a lack of social intelligence integration at the level of customer relationship management systems. Thus, social customer relationship management (SCRM) systems have become an interesting research area. However, they need deep analytic techniques to transform the large amount of data ("Big Data") into actionable insights. Such systems also require advanced modelling and data processing methods, and must consider the emerging paradigm related to proactive systems. In this paper, we propose an agent-based social framework that extracts and consolidates the reviews expressed via social media, in order to help enterprises know more about customers' opinions toward a particular product or service. To illustrate our approach, we present the case study of Twitter reviews that we use to extract opinions and sentiment about a set of products using the SentiGem API. Data extraction, analysis and storage are performed using a framework based on Hadoop MapReduce and HBase.

  17. Greenhouse Gas Emissions from Agricultural Production

    DEFF Research Database (Denmark)

    Bennetzen, Eskild Hohlmann

    unit. This dissertation presents results and comprehensions from my PhD study on the basis of three papers. The overall aim has been to develop a new identity-based framework, the KPI, to estimate and analyse GHG emissions from agriculture and LUC and apply this on national, regional and global level. The KPI enables combined analyses of changes in total emissions, emissions per area and emissions per product. Also, the KPI can be used to assess how a change in each GHG emission category affects the change in total emissions; thus pointing to where things are going well and where things are going less well in relation to what is actually produced. The KPI framework is scale independent and can be applied at any level from field and farm to global agricultural production. Paper I presents the first attempt to develop the KPI identity framework and, as a case study, GHG emissions from Danish crop

  18. Presentation of the Results from the Project of Making Base Documents for Low-Emission Development Strategy for Croatia Until 2030 with an Outlook to 2050

    International Nuclear Information System (INIS)

    Jelavic, V.; Delija, V.; Herencic, L.

    2016-01-01

    The Paris Climate Agreement is the United Nations Framework Convention on Climate Change's way of encouraging countries to prepare low-emission development strategies, and it shows that climate change requires long-term development strategies that support sustainable development, with the purpose of limiting the increase of global temperature to at most 2 degrees Celsius by the end of the century. The starting point of the EU's policy towards a low-emission economy is a goal of reducing greenhouse gas emissions by 80-95 percent by 2050. In accordance with that goal, the European Council adopted the 2030 climate-energy framework in October 2014, which sets a goal of reducing emissions by 40 percent by 2030. It also sets a goal of a renewable energy sources share of up to 27 percent and an indicative goal of reducing energy consumption by 27 percent. At this moment, a process of adoption and consultation for the regulatory climate-energy framework is under way, as well as a revision of the greenhouse gas market directive, the regulation on distribution of load among countries, and the regulation on calculation of emissions from the Land Use, Land-Use Change and Forestry (LULUCF) sector. In 2015, the Ministry of Environment and Energy started a project of preparing base documents for the Low-Emission Development Strategy until 2030 with an outlook to 2050. This strategy applies to every economic and human activity sector and is especially linked to the energy, industrial, transport, agriculture, forestry and waste management sectors. In the process of making base documents for the Strategy, a number of scenarios were analysed, models for simulation and optimisation were applied, and an integral model for national greenhouse gas projections was developed. The Strategy outlines three scenarios: the Reference Scenario represents the application of existing legislation, while the other two scenarios present a transition towards a low-emission economy: the Gradual Transition Scenario (NU1) and the Strong Transition Scenario

  19. Molecular genotyping of Colletotrichum species based on arbitrarily primed PCR, A + T-Rich DNA, and nuclear DNA analyses

    Science.gov (United States)

    Freeman, S.; Pham, M.; Rodriguez, R.J.

    1993-01-01

    Molecular genotyping of Colletotrichum species based on arbitrarily primed PCR, A + T-rich DNA, and nuclear DNA analyses. Experimental Mycology 17, 309-322. Isolates of Colletotrichum were grouped into 10 separate species based on arbitrarily primed PCR (ap-PCR), A + T-rich DNA (AT-DNA) and nuclear DNA banding patterns. In general, the grouping of Colletotrichum isolates by these molecular approaches corresponded to that obtained by classical taxonomic identification; however, some exceptions were observed. PCR amplification of genomic DNA using four different primers allowed reliable differentiation between isolates of the 10 species. HaeIII digestion patterns of AT-DNA also distinguished between species of Colletotrichum by generating species-specific band patterns. In addition, hybridization of the repetitive DNA element (GcpR1) to genomic DNA identified a unique set of PstI-digested nuclear DNA fragments in each of the 10 species of Colletotrichum tested. Multiple isolates of C. acutatum, C. coccodes, C. fragariae, C. lindemuthianum, C. magna, C. orbiculare, C. graminicola from maize, and C. graminicola from sorghum showed 86-100% intraspecies similarity based on ap-PCR and AT-DNA analyses. Interspecies similarity determined by ap-PCR and AT-DNA analyses varied between 0 and 33%. Three distinct banding patterns were detected in isolates of C. gloeosporioides from strawberry. Similarly, three different banding patterns were observed among isolates of C. musae from diseased banana.
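
    The abstract reports intra- and interspecies similarity percentages derived from banding patterns but does not state which similarity coefficient was used; the sketch below assumes a simple Dice-type band-sharing coefficient on presence/absence data, with hypothetical band sizes, purely for illustration.

    # Minimal sketch, assuming a Dice-type band-sharing coefficient; the abstract does not
    # say which similarity measure was used, so treat this purely as an illustration.
    def dice_similarity(bands_a, bands_b):
        """Percent similarity between two isolates from presence/absence of DNA bands."""
        shared = len(bands_a & bands_b)
        return 100.0 * 2 * shared / (len(bands_a) + len(bands_b))

    # Hypothetical ap-PCR banding patterns (fragment sizes in bp) for two isolates
    isolate_1 = {250, 420, 600, 900, 1300}
    isolate_2 = {250, 420, 600, 900, 1500}
    print(f"{dice_similarity(isolate_1, isolate_2):.0f}% similarity")  # 80% for these toy patterns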

  20. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    Energy Technology Data Exchange (ETDEWEB)

    Milani, Gabriele, E-mail: milani@stru.polimi.it; Valente, Marco, E-mail: milani@stru.polimi.it [Department of Architecture, Built Environment and Construction Engineering (ABC), Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan (Italy)

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software package based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is much lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for considerably reducing the seismic vulnerability of this kind of historical structure.
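
    As a reading aid for the normalization mentioned in the abstract, the sketch below divides a hypothetical base shear at failure by the weight of the structure to obtain the non-dimensional collapse multiplier, which can be read as a fraction of g and hence as a rough indication of the collapse peak ground acceleration; the numbers are invented.

    # Sketch of the normalization described above: base shear at failure / structure weight
    # gives a non-dimensional collapse multiplier alpha, readable as a fraction of g and
    # hence as a rough indication of the collapse PGA. All values are hypothetical.
    G = 9.81  # m/s^2

    def collapse_multiplier(base_shear_at_failure_kn, structure_weight_kn):
        """Non-dimensional base shear at failure, alpha = V_failure / W."""
        return base_shear_at_failure_kn / structure_weight_kn

    alpha = collapse_multiplier(base_shear_at_failure_kn=1800.0, structure_weight_kn=12000.0)
    print(f"alpha = {alpha:.2f}, indicative collapse PGA = {alpha * G:.2f} m/s^2 ({alpha:.2f} g)")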

  1. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    International Nuclear Information System (INIS)

    Milani, Gabriele; Valente, Marco

    2014-01-01

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using a non-commercial software package based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is much lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial for considerably reducing the seismic vulnerability of this kind of historical structure.

  2. Project ARES: analysis of greenhouse gas emission reduction strategies. Synthesis report, July 2002; Projet ARES analyse des strategies de reduction des emissions de gaz a effet de serre. Rapport de synthese juillet 2002

    Energy Technology Data Exchange (ETDEWEB)

    Criqui, P; Blanchard, O; Kitous, A [Institut d' Economie et de Politique de l' Energie, IEPE - UPR 19 du CNRS, 38 - Grenoble (France); Hourcade, J Ch; Ghersi, F [Centre International de Recherche sur l' Environnement et le Developpement (CIRED-CNRS), 94 - Nogent sur Marne (France); Kousnetzoff, N; Genet, J; Fahr, St [Centre d' Etudes Prospectives et d' Informations Internationales (CEPII/CIREM), 75 - Paris (France); Soria, A; Russ, P [Institute for Prospective Technological Studies (IPTS), Seville (Spain)

    2002-07-15

    The ARES project was organized around three main activities. The first part was the elaboration by the CEPII of a scenario of world economic growth, detailed by region, for the year 2030. The second part, developed by the IEPE, was a scenario of allocation of emission quotas for the year 2030 based on a gradual reduction of emissions growth in the developing countries, the evaluation of the scenario with the POLES model, and a comparison of the results with alternative models described in the literature or proposed in the negotiations. The last part was the extension and development by the CIRED of the 14-zone IMACLIM model, the elaboration of interfaces with POLES, and the study of the general equilibrium effects of the different allocation scenarios studied by the IEPE. (A.L.B.)

  3. Calculation and decomposition of indirect carbon emissions from residential consumption in China based on the input–output model

    International Nuclear Information System (INIS)

    Zhu Qin; Peng Xizhe; Wu Kaiya

    2012-01-01

    Based on the input–output model and the comparable price input–output tables, the current paper investigates the indirect carbon emissions from residential consumption in China in 1992–2005, and examines the impacts on the emissions using the structural decomposition method. The results demonstrate that the rise of the residential consumption level played a dominant role in the growth of residential indirect emissions. The persistent decline in the carbon emission intensity of industrial sectors had a significant negative effect on the emissions. The change in the intermediate demand of industrial sectors resulted in an overall positive effect, except in the initial years. The increase in population contributed to the growth of the indirect emissions to a certain extent; however, population size is no longer the main reason for the growth of the emissions. The change in the consumption structure showed a weak positive effect, demonstrating the importance for China of controlling and slowing the increase in the emissions while optimizing the residential consumption structure. The results imply that China should rely on restructuring the economy and improving efficiency, rather than on lowering the scale of consumption, to achieve its targets of energy conservation and emission reduction. - Highlights: ► We build the input–output model of indirect carbon emissions from residential consumption. ► We calculate the indirect emissions using the comparable price input–output tables. ► We examine the impacts on the indirect emissions using the structural decomposition method. ► The change in the consumption structure showed a weak positive effect on the emissions. ► China's population size is no longer the main reason for the growth of the emissions.
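
    The abstract names the input–output model but not its equations; the sketch below shows the standard environmentally extended input–output calculation usually behind such estimates (indirect emissions = emission intensities x Leontief inverse x residential final demand), with a made-up three-sector example rather than the paper's data.

    # Standard environmentally extended input-output calculation (a sketch, not the paper's
    # own data or code): E_indirect = f (I - A)^-1 y, with f the sectoral emission
    # intensities, A the technical-coefficient matrix and y residential final demand.
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],    # technical coefficients (inter-industry structure)
                  [0.15, 0.10, 0.10],
                  [0.05, 0.05, 0.20]])
    f = np.array([2.5, 1.2, 0.8])        # tCO2 per unit of sectoral output (invented)
    y = np.array([100.0, 250.0, 180.0])  # residential final demand by sector (invented)

    L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
    indirect_emissions = f @ (L @ y)     # emissions embodied in residential consumption
    print(f"Indirect residential emissions: {indirect_emissions:.1f} tCO2")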

  4. Non-localization and localization ROC analyses using clinically based scoring

    Science.gov (United States)

    Paquerault, Sophie; Samuelson, Frank W.; Myers, Kyle J.; Smith, Robert C.

    2009-02-01

    We are investigating the potential for differences in study conclusions when assessing the estimated impact of a computer-aided detection (CAD) system on readers' performance. The data utilized in this investigation were derived from a multi-reader multi-case observer study involving one hundred mammographic background images to which fixed-size and fixed-intensity Gaussian signals were added, generating low- and high-intensity signal sets. The study setting allowed CAD assessment in two situations: when CAD sensitivity was 1) superior or 2) inferior to that of the average reader. Seven readers were asked to review each set in the unaided and CAD-aided reading modes, and to mark and rate their findings. Using these data, we studied the effect of three clinically based receiver operating characteristic (ROC) scoring definitions on the study conclusion. These scoring definitions included both location-specific and non-location-specific rules. The results showed agreement in the estimated impact of CAD on overall reader performance. In the study setting where CAD sensitivity is superior to that of the average reader, the mean difference in AUC between the CAD-aided and unaided reads was 0.049 (95% CI: -0.027, 0.130) for the image scoring definition based on non-location-specific rules, and 0.104 (95% CI: 0.036, 0.174) and 0.090 (95% CI: 0.031, 0.155) for the image scoring definitions based on location-specific rules. The increases in AUC were statistically significant for the location-specific scoring definitions. It was further observed that the variance of these estimates was smaller when using the location-specific scoring definitions than when using a non-location-specific scoring definition. In the study setting where CAD sensitivity is equivalent or inferior to that of the average reader, the mean differences in AUC are slightly above 0.01 for all image scoring definitions. These increases in AUC were not statistically significant for any of the image scoring definitions.
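
    For readers unfamiliar with how the AUC differences quoted above are obtained, the sketch below uses the rank-based (Mann–Whitney) AUC estimator on hypothetical ratings; the actual multi-reader multi-case analysis and its confidence intervals are considerably more involved.

    # Minimal sketch of the rank-based (Mann-Whitney) AUC estimator behind comparisons such
    # as AUC(CAD-aided) - AUC(unaided). Ratings are hypothetical; the real multi-reader
    # multi-case analysis and its confidence intervals are more involved.
    def auc(signal_ratings, noise_ratings):
        """Probability that a signal-present image is rated above a signal-absent one."""
        wins = ties = 0
        for s in signal_ratings:
            for n in noise_ratings:
                if s > n:
                    wins += 1
                elif s == n:
                    ties += 1
        return (wins + 0.5 * ties) / (len(signal_ratings) * len(noise_ratings))

    unaided_signal, unaided_noise = [62, 71, 80, 55, 90], [40, 58, 35, 66, 50]
    aided_signal, aided_noise = [70, 78, 85, 60, 92], [42, 55, 38, 64, 48]
    delta = auc(aided_signal, aided_noise) - auc(unaided_signal, unaided_noise)
    print(f"Estimated change in AUC with CAD: {delta:+.3f}")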

  5. Drive-based recording analyses at >800 Gfc/in² using shingled recording

    International Nuclear Information System (INIS)

    William Cross, R.; Montemorra, Michael

    2012-01-01

    Since the introduction of perpendicular recording, conventional perpendicular scaling has enabled the hard disk drive industry to deliver products ranging from ∼130 to well over 500 Gb/in² in a little over 4 years. The incredible areal density growth spurt enabled by perpendicular recording is now endangered by an inability to effectively balance writeability with erasure effects at the system level. Shingled magnetic recording (SMR) offers an effective means to continue perpendicular areal density growth using conventional heads and tuned media designs. The use of specially designed edge-write head structures (also known as 'corner writers') should further increase the AD gain potential for shingled recording. In this paper, we demonstrate the drive-based recording performance characteristics of a shingled recording system at areal densities in excess of 800 Gb/in² using a conventional head. Using a production drive base, developmental heads/media and a number of sophisticated analytical routines, we have studied the recording performance of a shingled magnetic recording subsystem. Our observations confirm excellent writeability in excess of 400 ktpi and a perpendicular system with acceptable noise balance, especially at extreme ID and OD skews where the benefits of SMR are quite pronounced. We believe that this demonstration illustrates that SMR is not only capable of productization, but is likely the path of least resistance toward production drive areal densities closer to 1 Tb/in² and beyond. - Research highlights: → Drive-based recording at 805 Gfc/in² has been demonstrated using both 95 and 65 mm drive platforms at roughly 430 ktpi and 1.87 Mfci. → Limiting factors for shingled recording include side reading, which is dominated by the reader crosstrack skirt profile, MT10 being a representative metric. → Media jitter and associated DC media SNR further limit areal density, dominated by crosstrack transition curvature, downtrack
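
    A quick consistency check of the figures quoted in the highlights: to first order, areal density is track density times linear density, so 430 ktpi at 1.87 Mfci lands at roughly the 805 Gfc/in² quoted (user bit density would differ once the recording code's flux-change-to-bit ratio is accounted for).

    # Rough consistency check of the quoted figures; treating areal density as
    # track density x linear (flux-change) density is a first-order simplification.
    track_density_tpi = 430e3        # tracks per inch
    linear_density_fci = 1.87e6      # flux changes per inch
    areal_density_gfc = track_density_tpi * linear_density_fci / 1e9
    print(f"~{areal_density_gfc:.0f} Gfc/in^2")  # ~804, consistent with the ~805 quoted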

  6. Identification of provenance rocks based on EPMA analyses of heavy minerals

    Science.gov (United States)

    Shimizu, M.; Sano, N.; Ueki, T.; Yonaga, Y.; Yasue, K. I.; Masakazu, N.

    2017-12-01

    Information on mountain building is significant in the field of geological disposal of high-level radioactive waste, because it affects the long-term stability of the groundwater flow system. Provenance analysis is one of the effective approaches for understanding the building process of mountains. Chemical compositions of heavy minerals, as well as their chronological data, can serve as an index for identification of provenance rocks. Accurate identification requires the measurement of as many grains as possible. In order to achieve an efficient provenance analysis, we developed a method for quick identification of heavy minerals using an Electron Probe Micro Analyzer (EPMA). In this method, heavy mineral grains extracted from a sample were aligned on a glass slide and mounted in resin. Concentrations of 28 elements were measured for 300-500 grains per sample using the EPMA. To measure as many grains as possible, we prioritized swiftness of measurement over precision, setting a measurement time of about 3.5 minutes per grain. Identification of heavy minerals was based on their chemical composition. We developed a Microsoft® Excel® spreadsheet implementing criteria for mineral identification based on the typical range of chemical compositions of each mineral. The grains of 110 wt.% total were rejected. The criteria of mineral identification were revised through t