WorldWideScience

Sample records for rise methodology estimates

  1. Methodology for estimating human perception to tremors in high-rise buildings

    Science.gov (United States)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
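
    As an illustration of the kind of predictive model described above, the sketch below computes an average response spectrum intensity over the 0.1-2.0 s period band and feeds it to a hypothetical logistic relation for the percentage of occupants perceiving a tremor; the coefficients and the spectrum are placeholders, not the paper's fitted values.

      import numpy as np

      def avg_spectrum_intensity(periods, spectral_accel, t_min=0.1, t_max=2.0):
          # Average response spectral value over the 0.1-2.0 s period band.
          mask = (periods >= t_min) & (periods <= t_max)
          return np.trapz(spectral_accel[mask], periods[mask]) / (t_max - t_min)

      def perception_percentage(intensity, a=-3.0, b=250.0):
          # Hypothetical logistic predictive model: percentage of occupants
          # perceiving the tremor at a given ground-motion intensity
          # (a and b are illustrative placeholders, not the paper's values).
          return 100.0 / (1.0 + np.exp(-(a + b * intensity)))

      periods = np.linspace(0.05, 5.0, 200)      # s
      sa = 0.02 * np.exp(-periods)               # example spectral accelerations (g)
      si = avg_spectrum_intensity(periods, sa)
      print(f"intensity = {si:.4f} g, perceived by ~{perception_percentage(si):.1f}% of occupants")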

  2. Estimates of the Economic Effects of Sea Level Rise

    International Nuclear Information System (INIS)

    Darwin, R.F.; Tol, R.S.J.

    2001-01-01

    Regional estimates of direct cost (DC) are commonly used to measure the economic damages of sea level rise. Such estimates suffer from three limitations: (1) values of threatened endowments are not well known, (2) loss of endowments does not affect consumer prices, and (3) international trade is disregarded. Results in this paper indicate that these limitations can significantly affect economic assessments of sea level rise. Current uncertainty regarding endowment values (as reflected in two alternative data sets), for example, leads to a 17 percent difference in coastal protection, a 36 percent difference in the amount of land protected, and a 36 percent difference in DC globally. Also, global losses in equivalent variation (EV), a welfare measure that accounts for price changes, are 13 percent higher than DC estimates. Regional EV losses may be up to 10 percent lower than regional DC, however, because international trade tends to redistribute losses from regions with relatively high damages to regions with relatively low damages. 43 refs

  3. Methodology for generating waste volume estimates

    International Nuclear Information System (INIS)

    Miller, J.Q.; Hale, T.; Miller, D.

    1991-09-01

    This document describes the methodology that will be used to calculate waste volume estimates for site characterization and remedial design/remedial action activities at each of the DOE Field Office, Oak Ridge (DOE-OR) facilities. This standardized methodology is designed to ensure consistency in waste estimating across the various sites and organizations that are involved in environmental restoration activities. The criteria and assumptions that are provided for generating these waste estimates will be implemented across all DOE-OR facilities and are subject to change based on comments received and actual waste volumes measured during future sampling and remediation activities. 7 figs., 8 tabs

  4. Methodology of project management at implementation of projects of high-rise construction

    Science.gov (United States)

    Papelniuk, Oksana

    2018-03-01

    High-rise construction is a promising direction in urban development. The opportunity to place a large amount of residential and commercial space on a relatively small land plot makes high-rise construction very attractive for developers. However, investment projects for the construction of high-rise buildings are expensive and complex, which makes effective management of such projects a key task for the developer. The best tool in this area today is the methodology of project management, which becomes a key factor of efficiency.

  5. Air quality estimation by computational intelligence methodologies

    Directory of Open Access Journals (Sweden)

    Ćirić Ivan T.

    2012-01-01

    Full Text Available The subject of this study is to compare different computational intelligence methodologies based on artificial neural networks used for forecasting an air quality parameter - the emission of CO2 - in the city of Niš. First, the inputs of the CO2 emission estimator are analyzed and their measurement is explained. Traffic is known to be the single largest emitter of CO2 in Europe, so proper treatment of this component of pollution is very important for precise estimation of emission levels. With this in mind, measurements of traffic frequency and CO2 concentration were carried out at critical intersections in the city, along with monitoring of vehicle directions at the crossroads. Finally, based on the experimental data, different soft-computing estimators were developed, such as a feed-forward neural network, a recurrent neural network, and a hybrid neuro-fuzzy estimator of CO2 emission levels. Test data for some characteristic cases, presented at the end of the paper, show good agreement of the developed estimators' outputs with experimental data. The presented results are a true indicator of the implemented method's usability. [Projects of the Ministry of Science of the Republic of Serbia, no. III42008-2/2011: Evaluation of Energy Performances and Indoor Environment Quality of Educational Buildings in Serbia with Impact to Health, and no. TR35016/2011: Research of MHD Flows around the Bodies, in the Tip Clearances and Channels and Application in the MHD Pumps Development]
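
    A minimal sketch of one of the soft-computing estimators named above (a feed-forward neural network), trained on synthetic traffic/emission data rather than the Niš measurements; the architecture and data are assumptions for illustration only.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # Synthetic stand-in for the measured inputs: traffic frequency in three
      # directions at an intersection (vehicles/h); the target is a CO2
      # emission-level proxy.  Data and coefficients are illustrative only.
      X = rng.uniform(100, 2000, size=(500, 3))
      y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 50, size=500)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10, 10),
                                         max_iter=5000, random_state=0))
      model.fit(X[:400], y[:400])
      print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))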

  6. Contaminated soil volume estimate tracking methodology

    International Nuclear Information System (INIS)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-01-01

    The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume
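
    A minimal bookkeeping sketch of the volume-tracking idea described above (subtract the excavated volume, supersede it with an updated survey-based estimate when one is available); it does not reproduce the site's geostatistical/Bayesian estimation itself, and the figures are hypothetical.

      def update_remaining_volume(initial_estimate_m3, excavated_m3, revised_in_situ_m3=None):
          # Track remaining contaminated soil volume during remediation.
          # If a new survey-based estimate of remaining in situ soil is
          # available (e.g. from updated gamma walkover and discrete samples),
          # it supersedes the simple subtraction.  Illustrative bookkeeping only.
          remaining = initial_estimate_m3 - excavated_m3
          if revised_in_situ_m3 is not None:
              remaining = revised_in_situ_m3
          return max(remaining, 0.0)

      # Example: 12,000 m3 initially estimated, 4,500 m3 excavated so far,
      # updated survey now indicates 8,200 m3 still above cleanup guidelines.
      print(update_remaining_volume(12_000, 4_500, revised_in_situ_m3=8_200))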

  7. Methodology for uranium resource estimates and reliability

    International Nuclear Information System (INIS)

    Blanchfield, D.M.

    1980-01-01

    The NURE uranium assessment method has evolved from a small group of geologists estimating resources on a few lease blocks, to a national survey involving an interdisciplinary system consisting of the following: (1) geology and geologic analogs; (2) engineering and cost modeling; (3) mathematics and probability theory, psychology and elicitation of subjective judgments; and (4) computerized calculations, computer graphics, and data base management. The evolution has been spurred primarily by two objectives: (1) quantification of uncertainty, and (2) elimination of simplifying assumptions. This has resulted in a tremendous data-gathering effort and the involvement of hundreds of technical experts, many in uranium geology, but many from other fields as well. The rationality of the methods is still largely based on the concept of an analog and the observation that the results are reasonable. The reliability, or repeatability, of the assessments is reasonably guaranteed by the series of peer and superior technical reviews which has been formalized under the current methodology. The optimism or pessimism of individual geologists who make the initial assessments is tempered by the review process, resulting in a series of assessments which are a consistent, unbiased reflection of the facts. Despite the many improvements over past methods, several objectives for future development remain, primarily to reduce subjectivity in utilizing factual information in the estimation of endowment, and to improve the recognition of cost uncertainties in the assessment of economic potential. The 1980 NURE assessment methodology will undoubtedly be improved, but the reader is reminded that resource estimates are and always will be a forecast for the future

  8. Methodology for determining the investment attractiveness of construction of high-rise buildings

    Science.gov (United States)

    Nezhnikova, Ekaterina; Kashirin, Valentin; Davydova, Yana; Kazakova, Svetlana

    2018-03-01

    The article presents an analysis of existing methods for assessing the investment attractiveness of high-rise construction. The authors determined and justified the primary choice of sites and territories that are most attractive for the development of high-rise construction. A system of risk indicators has been developed that allows a quantitative adjustment to be made for a particular project when evaluating the efficiency of investment projects. The study is aimed at developing basic methodological concepts for a comparative evaluation of the prospects of constructing high-rise facilities that take into consideration the features of investment in construction and enable quantitative evaluation of investment effectiveness in high-rise construction.

  9. Cost estimating for CERCLA remedial alternatives: a unit cost methodology

    International Nuclear Information System (INIS)

    Brettin, R.W.; Carr, D.J.; Janke, R.J.

    1995-06-01

    The United States Environmental Protection Agency (EPA) Guidance for Conducting Remedial Investigations and Feasibility Studies Under CERCLA, Interim Final, dated October 1988 (EPA 1988) requires a detailed analysis be conducted of the most promising remedial alternatives against several evaluation criteria, including cost. To complete the detailed analysis, order-of-magnitude cost estimates (having an accuracy of +50 percent to -30 percent) must be developed for each remedial alternative. This paper presents a methodology for developing cost estimates of remedial alternatives comprised of various technology and process options with a wide range of estimated contaminated media quantities. In addition, the cost estimating methodology provides flexibility for incorporating revisions to remedial alternatives and achieves the desired range of accuracy. It is important to note that the cost estimating methodology presented here was developed as a concurrent path to the development of contaminated media quantity estimates. This methodology can be initiated before contaminated media quantities are estimated. As a result, this methodology is useful in developing cost estimates for use in screening and evaluating remedial technologies and process options. However, remedial alternative cost estimates cannot be prepared without the contaminated media quantity estimates. In the conduct of the feasibility study for Operable Unit 5 at the Fernald Environmental Management Project (FEMP), fourteen remedial alternatives were retained for detailed analysis. Each remedial alternative was composed of combinations of remedial technologies and processes which were earlier determined to be best suited for addressing the media-specific contaminants found at the FEMP site, and achieving desired remedial action objectives
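
    A minimal sketch of a unit-cost roll-up with the order-of-magnitude accuracy band quoted above; the unit costs and media quantities are hypothetical placeholders.

      # Hypothetical unit costs and quantities; the -30%/+50% band reflects the
      # order-of-magnitude accuracy required for the CERCLA detailed analysis.
      unit_costs = {"excavation_m3": 45.0, "disposal_m3": 210.0, "capping_m2": 18.0}
      quantities = {"excavation_m3": 30_000, "disposal_m3": 30_000, "capping_m2": 50_000}

      base = sum(unit_costs[k] * quantities[k] for k in unit_costs)
      low, high = 0.7 * base, 1.5 * base
      print(f"base ${base:,.0f}  (range ${low:,.0f} to ${high:,.0f})")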

  10. Microsphere estimates of blood flow: Methodological considerations

    International Nuclear Information System (INIS)

    von Ritter, C.; Hinder, R.A.; Womack, W.; Bauerfeind, P.; Fimmel, C.J.; Kvietys, P.R.; Granger, D.N.; Blum, A.L.

    1988-01-01

    The microsphere technique is a standard method for measuring blood flow in experimental animals. Sporadic reports have appeared outlining the limitations of this method. In this study the authors have systematically assessed the effect of blood withdrawals for reference sampling, microsphere numbers, and anesthesia on blood flow estimates using radioactive microspheres in dogs. Experiments were performed on 18 conscious and 12 anesthetized dogs. Four blood flow estimates were performed over 120 min using 1 x 10^6 microspheres each time. The effects of excessive numbers of microspheres, pentobarbital sodium anesthesia, and replacement of volume loss for reference samples with dextran 70 were assessed. In both conscious and anesthetized dogs a progressive decrease in gastric mucosal blood flow and cardiac output was observed over 120 min. This was also observed in the pancreas in conscious dogs. The major factor responsible for these changes was the volume loss due to the reference sample withdrawals. Replacement of the withdrawn blood with dextran 70 led to stable blood flows to all organs. The injection of excessive numbers of microspheres did not modify hemodynamics to a greater extent than did the injection of 4 million microspheres. Anesthesia exerted no influence on blood flow other than raising coronary flow. The authors conclude that although blood flow to the gastric mucosa and the pancreas is sensitive to the minor hemodynamic changes associated with the microsphere technique, replacement of volume loss for reference samples ensures stable blood flow to all organs over a 120-min period
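
    For reference, the standard reference-sample calculation that underlies the technique discussed above, shown as a sketch with illustrative numbers:

      def organ_blood_flow(organ_counts, reference_counts, withdrawal_rate_ml_min):
          # Reference-sample microsphere calculation (standard form, sketched here):
          # organ flow = organ activity / reference activity x withdrawal rate.
          return organ_counts / reference_counts * withdrawal_rate_ml_min

      # Example: gastric mucosa sample with 2.4e4 counts, reference blood sample
      # with 1.2e5 counts withdrawn at 7.5 mL/min.
      print(organ_blood_flow(2.4e4, 1.2e5, 7.5), "mL/min")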

  11. Methodology for estimating sodium aerosol concentrations during breeder reactor fires

    International Nuclear Information System (INIS)

    Fields, D.E.; Miller, C.W.

    1985-01-01

    We have devised and applied a methodology for estimating the concentration of aerosols released at building surfaces and monitored at other building surface points. We have used this methodology to make calculations that suggest, for one air-cooled breeder reactor design, cooling will not be compromised by severe liquid-metal fires

  12. Genome size estimation: a new methodology

    Science.gov (United States)

    Álvarez-Borrego, Josué; Gallardo-Escárate, Crisitian; Kober, Vitaly; López-Bonilla, Oscar

    2007-03-01

    Recently, within cytogenetic analysis, the evolutionary relationships implied by the nuclear DNA content of plants and animals have received great attention. The first detailed measurements of nuclear DNA content were made in the early 1940s, several years before Watson and Crick proposed the molecular structure of DNA. In the following years Hewson Swift developed the concept of the "C-value" in reference to the haploid DNA content in plants. Later, Mirsky and Ris carried out the first systematic study of genome size in animals, including representatives of the five superclasses of vertebrates as well as some invertebrates. From these preliminary results it became evident that DNA content varies enormously between species and that this variation bears no relation to the intuitive notion of organismal complexity. This observation was reaffirmed as studies of genome size multiplied in the following years, and the characteristic came to be called the "paradox of the C-value". A few years later, with the discovery of non-coding DNA, the paradox was resolved; nevertheless, numerous questions remain open to this day, and such studies are now referred to as the "C-value enigma". In this study, we report a new method for genome size estimation by quantification of fluorescence fading. We measured the fluorescence intensity every 1600 milliseconds in DAPI-stained nuclei. The estimate of the area under the curve (integral fading) during the fading period was related to genome size.
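
    A minimal sketch of the integral-fading computation described above (trapezoidal area under a fluorescence-fading curve sampled every 1600 ms); the final calibration to genome size is a hypothetical linear scaling, not the authors' fitted relation.

      import numpy as np

      def integral_fading(intensity, dt_ms=1600.0):
          # Area under the DAPI fluorescence-fading curve (trapezoidal rule),
          # sampled every 1600 ms as described in the abstract.
          t = np.arange(len(intensity)) * dt_ms / 1000.0   # seconds
          return np.trapz(intensity, t)

      def genome_size_pg(fading_integral, slope=1.0e-3, intercept=0.0):
          # Hypothetical linear calibration from integral fading to genome size.
          return slope * fading_integral + intercept

      fade = np.exp(-np.arange(0, 60) / 20.0)   # example fading series
      print(genome_size_pg(integral_fading(fade)))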

  13. New way to raise estimation objectivity of radiation consequences on humans

    International Nuclear Information System (INIS)

    Akhmatullina, N. B.

    2001-01-01

    Discussion of the negative consequences of radiation for humans often overlooks the fact that the principal danger of a rise in environmental radiation levels lies in defects to genetic structures. It is changes in the genome that lead to the various negative consequences; such changes not only accompany them but precede them. However, the tendency that has appeared in our country to substitute references to increased morbidity in separate nosologic groups, whose scope is widened arbitrarily, for direct genetic analysis has made the methodological approach inadequate, distorted the very definition of 'genetic consequence' and, as a result, distorted the real estimation of the consequences of the Kazakh test sites (TS) and other sources of radioactive contamination. The question arises: how can we distinguish the observed and described effects from those of other genotoxicants of chemical and biological origin? Different cytogenetic methods exist to detect genetic damage. The most widely used is estimation of the frequency of chromosome anomalies in somatic cells, especially in lymphocytes of peripheral blood. Traditionally, researchers proceed from the fine mechanisms of mutagenesis, which indicate that radiation mutagenesis leads primarily to chromosome-type aberrations, whereas chemical mutagenesis leads primarily to chromatid-type aberrations. Under radiation exposure, chromosome aberrations appear in non-dividing lymphocytes (G1 phase) and are easily observed in the first metaphase (Browen et al. 1972, Bender et al. 1966). In contrast, aberrations induced by chemical factors appear primarily in the S phase irrespective of the stage of the cycle at which the cells were exposed, so the majority of such aberrations are of the chromatid type (Evanse et al. 1980, Preston et al. 1981). Following these criteria, many original investigations of people exposed to radiation have been carried out. Moreover, it was proved that the application of such a method to estimate the absorbed radiation in the

  14. Methodology for Estimating Ingestion Dose for Emergency Response at SRS

    CERN Document Server

    Simpkins, A A

    2002-01-01

    At the Savannah River Site (SRS), emergency response models estimate dose for the inhalation and ground shine pathways. A methodology has been developed to incorporate ingestion doses into the emergency response models. The methodology follows a two-phase approach. The first phase estimates site-specific derived response levels (DRLs) which can be compared with predicted ground-level concentrations to determine if intervention is needed to protect the public. This phase uses accepted methods with little deviation from recommended guidance. The second phase uses site-specific data to estimate a 'best estimate' dose to offsite individuals from ingestion of foodstuffs. While this method deviates from recommended guidance, it is technically defensible and more realistic. As guidance is updated, these methods will also need to be updated.

  15. A reconciled estimate of glacier contributions to sea level rise: 2003 to 2009.

    Science.gov (United States)

    Gardner, Alex S; Moholdt, Geir; Cogley, J Graham; Wouters, Bert; Arendt, Anthony A; Wahr, John; Berthier, Etienne; Hock, Regine; Pfeffer, W Tad; Kaser, Georg; Ligtenberg, Stefan R M; Bolch, Tobias; Sharp, Martin J; Hagen, Jon Ove; van den Broeke, Michiel R; Paul, Frank

    2013-05-17

    Glaciers distinct from the Greenland and Antarctic Ice Sheets are losing large amounts of water to the world's oceans. However, estimates of their contribution to sea level rise disagree. We provide a consensus estimate by standardizing existing, and creating new, mass-budget estimates from satellite gravimetry and altimetry and from local glaciological records. In many regions, local measurements are more negative than satellite-based estimates. All regions lost mass during 2003-2009, with the largest losses from Arctic Canada, Alaska, coastal Greenland, the southern Andes, and high-mountain Asia, but there was little loss from glaciers in Antarctica. Over this period, the global mass budget was -259 ± 28 gigatons per year, equivalent to the combined loss from both ice sheets and accounting for 29 ± 13% of the observed sea level rise.
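
    For context, converting the quoted mass budget into a sea-level-equivalent rate, using the common approximation that roughly 362 Gt of ice corresponds to 1 mm of global mean sea level:

      # Converting the reported glacier mass budget to sea-level equivalent,
      # using ~362 Gt of ice per 1 mm of global mean sea level
      # (ocean area ~3.62e14 m^2).  Values below are the abstract's figures.
      GT_PER_MM = 362.0
      mass_budget_gt_per_yr = -259.0
      uncertainty_gt_per_yr = 28.0

      slr_mm_per_yr = -mass_budget_gt_per_yr / GT_PER_MM
      slr_err = uncertainty_gt_per_yr / GT_PER_MM
      print(f"glacier contribution ~ {slr_mm_per_yr:.2f} +/- {slr_err:.2f} mm/yr")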

  16. Application of precursor methodology in initiating frequency estimates

    International Nuclear Information System (INIS)

    Kohut, P.; Fitzpatrick, R.G.

    1991-01-01

    The precursor methodology developed in recent years provides a consistent technique to identify important accident sequence precursors. It relies on operational events (extracting information from actual experience) and infers core damage scenarios based on expected safety system responses. The ranking or categorization of each precursor is determined by considering the full spectrum of potential core damage sequences. The methodology estimates the frequency of severe core damage based on the approach suggested by Apostolakis and Mosleh, which may lead to a potential overestimation of the severe-accident sequence frequency due to the inherent dependencies between the safety systems and the initiating events. The methodology is an encompassing attempt to incorporate most of the operating information available from nuclear power plants and is an attractive tool from the point of view of risk management. In this paper, a further extension of this methodology is discussed with regard to the treatment of initiating frequency of the accident sequences

  17. Perception-oriented methodology for robust motion estimation design

    NARCIS (Netherlands)

    Heinrich, A.; Vleuten, van der R.J.; Haan, de G.

    2014-01-01

    Optimizing a motion estimator (ME) for picture rate conversion is challenging. This is because there are many types of MEs and, within each type, many parameters, which makes subjective assessment of all the alternatives impractical. To solve this problem, we propose an automatic design methodology

  18. A simple model to estimate the impact of sea-level rise on platform beaches

    Science.gov (United States)

    Taborda, Rui; Ribeiro, Mónica Afonso

    2015-04-01

    Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified in providing adequate predictive methods to use for platform beaches. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are higher for beaches with wide berms and when the shore platform develops at shallow depths. The application of the proposed model to Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.
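
    A generic volume-conservation (Bruun-type) retreat sketch in the spirit of the model described above; it is not the authors' platform-beach formulation, and the geometry values are illustrative.

      def retreat_volume_conservation(slr_m, active_width_m, berm_height_m, platform_depth_m):
          # Generic volume-conservation shoreline retreat: the sand needed to
          # raise the active profile by the sea-level rise is supplied by
          # landward retreat.  A sketch of the conservation idea only; the
          # paper's model additionally constrains the profile by the shore
          # platform and treats closed systems explicitly.
          return slr_m * active_width_m / (berm_height_m + platform_depth_m)

      # Example: 0.5 m of sea-level rise, 80 m active profile width, 2 m berm,
      # platform at 3 m depth -> about 8 m of retreat.
      print(retreat_volume_conservation(0.5, 80.0, 2.0, 3.0), "m")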

  19. Estimating sea-level allowances for Atlantic Canada under conditions of uncertain sea-level rise

    Directory of Open Access Journals (Sweden)

    B. Greenan

    2015-03-01

    Full Text Available This paper documents the methodology of computing sea-level rise allowances for Atlantic Canada in the 21st century under conditions of uncertain sea-level rise. The sea-level rise allowances are defined as the amount by which an asset needs to be raised in order to maintain the same likelihood of future flooding events as that site has experienced in the recent past. The allowances are determined by combining the statistics of present tides and storm surges (storm tides) with the regional projections of sea-level rise and the associated uncertainty. Tide-gauge data for nine sites from the Canadian Atlantic coast are used to derive the scale parameters of present sea-level extremes using the Gumbel distribution function. The allowances in the 21st century, with respect to the year 1990, were computed for the Intergovernmental Panel on Climate Change (IPCC) A1FI emission scenario. For Atlantic Canada, the allowances are regionally variable and, for the period 1990–2050, range between –13 and 38 cm while, for the period 1990–2100, they range between 7 and 108 cm. The negative allowances in the northern Gulf of St. Lawrence region are caused by land uplift due to glacial isostatic adjustment (GIA).
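
    A sketch of the allowance idea described above: fit a Gumbel scale parameter to storm-tide annual maxima and add a term for sea-level-rise uncertainty. The formula used here is the Hunter-type allowance for normally distributed uncertainty, assumed for illustration rather than taken from the paper, and the data are synthetic.

      import numpy as np
      from scipy.stats import gumbel_r

      # Fit the Gumbel scale parameter to annual maximum storm-tide levels (m),
      # then compute an allowance for a projected rise with uncertainty.
      annual_maxima = np.array([1.9, 2.1, 2.0, 2.3, 1.8, 2.2, 2.4, 2.0, 2.1, 2.5])
      _, scale = gumbel_r.fit(annual_maxima)        # (location, scale)

      def allowance(delta_slr_m, sigma_slr_m, gumbel_scale_m):
          # Hunter-type allowance: central rise plus sigma^2 / (2 * scale),
          # assuming normally distributed sea-level-rise uncertainty.
          return delta_slr_m + sigma_slr_m**2 / (2.0 * gumbel_scale_m)

      print(f"allowance = {allowance(0.5, 0.2, scale):.2f} m")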

  20. Observation-Driven Estimation of the Spatial Variability of 20th Century Sea Level Rise

    Science.gov (United States)

    Hamlington, B. D.; Burgos, A.; Thompson, P. R.; Landerer, F. W.; Piecuch, C. G.; Adhikari, S.; Caron, L.; Reager, J. T.; Ivins, E. R.

    2018-03-01

    Over the past two decades, sea level measurements made by satellites have given clear indications of both global and regional sea level rise. Numerous studies have sought to leverage the modern satellite record and available historic sea level data provided by tide gauges to estimate past sea level rise, leading to several estimates for the 20th century trend in global mean sea level in the range between 1 and 2 mm/yr. On regional scales, few attempts have been made to estimate trends over the same time period. This is due largely to the inhomogeneity and quality of the tide gauge network through the 20th century, which render commonly used reconstruction techniques inadequate. Here, a new approach is adopted, integrating data from a select set of tide gauges with prior estimates of spatial structure based on historical sea level forcing information from the major contributing processes over the past century. The resulting map of 20th century regional sea level rise is optimized to agree with the tide gauge-measured trends, and provides an indication of the likely contributions of different sources to regional patterns. Of equal importance, this study demonstrates the sensitivities of this regional trend map to current knowledge and uncertainty of the contributing processes.

  1. Methodological Framework for Estimating the Correlation Dimension in HRV Signals

    Directory of Open Access Journals (Sweden)

    Juan Bolea

    2014-01-01

    Full Text Available This paper presents a methodological framework for robust estimation of the correlation dimension in HRV signals. It includes (i a fast algorithm for on-line computation of correlation sums; (ii log-log curves fitting to a sigmoidal function for robust maximum slope estimation discarding the estimation according to fitting requirements; (iii three different approaches for linear region slope estimation based on latter point; and (iv exponential fitting for robust estimation of saturation level of slope series with increasing embedded dimension to finally obtain the correlation dimension estimate. Each approach for slope estimation leads to a correlation dimension estimate, called D^2, D^2⊥, and D^2max. D^2 and D^2max estimate the theoretical value of correlation dimension for the Lorenz attractor with relative error of 4%, and D^2⊥ with 1%. The three approaches are applied to HRV signals of pregnant women before spinal anesthesia for cesarean delivery in order to identify patients at risk for hypotension. D^2 keeps the 81% of accuracy previously described in the literature while D^2⊥ and D^2max approaches reach 91% of accuracy in the same database.
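
    A plain Grassberger-Procaccia sketch of the correlation-sum and slope computation that the framework above builds on; it omits the sigmoidal fitting and saturation analysis and uses arbitrary parameters.

      import numpy as np
      from scipy.spatial.distance import pdist

      def correlation_sum(embedded, r):
          # Fraction of point pairs closer than r in the embedded phase space.
          d = pdist(embedded)
          return np.mean(d < r)

      def correlation_dimension(series, m=5, tau=1, radii=None):
          # Slope of log C(r) vs log r over a range of radii: a plain
          # Grassberger-Procaccia estimate, without the paper's robust fitting.
          n = len(series) - (m - 1) * tau
          emb = np.column_stack([series[i * tau:i * tau + n] for i in range(m)])
          if radii is None:
              radii = np.logspace(-1.5, 0, 10) * np.std(series)
          c = np.array([correlation_sum(emb, r) for r in radii])
          good = c > 0
          slope, _ = np.polyfit(np.log(radii[good]), np.log(c[good]), 1)
          return slope

      rng = np.random.default_rng(1)
      print(correlation_dimension(rng.normal(size=2000)))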

  2. Consistent estimate of ocean warming, land ice melt and sea level rise from Observations

    Science.gov (United States)

    Blazquez, Alejandro; Meyssignac, Benoît; Lemoine, Jean Michel

    2016-04-01

    Based on the sea level budget closure approach, this study investigates the consistency of observed Global Mean Sea Level (GMSL) estimates from satellite altimetry, observed Ocean Thermal Expansion (OTE) estimates from in-situ hydrographic data (based on Argo for depths above 2000 m and oceanic cruises below), and GRACE observations of land water storage and land ice melt for the period January 2004 to December 2014. The consistency between these datasets is a key issue if we want to constrain missing contributions to sea level rise such as the deep ocean contribution. Numerous previous studies have addressed this question by summing up the different contributions to sea level rise and comparing the total to satellite altimetry observations (see for example Llovel et al. 2015, Dieng et al. 2015). Here we propose a novel approach which consists in correcting GRACE solutions over the ocean (essentially corrections of stripes and leakage from ice caps) with mass observations deduced from the difference between satellite altimetry GMSL and in-situ hydrographic OTE estimates. We check that the resulting corrected GRACE solutions are consistent with the original GRACE estimates of the geoid spherical harmonic coefficients within error bars, and we compare the resulting GRACE estimates of land water storage and land ice melt with independent results from the literature. This method provides a new mass redistribution from GRACE consistent with observations from altimetry and OTE. We test the sensitivity of this method to the deep ocean contribution and to the GIA models, and propose best estimates.
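
    The budget-closure arithmetic at the core of the approach, sketched with illustrative trend values (mm/yr):

      # Sea-level budget closure (annual-mean sketch): the ocean-mass component
      # is inferred as the difference between total sea level (altimetry) and
      # the steric component (Argo/hydrography), and can then be compared with,
      # or used to correct, GRACE ocean-mass estimates.  Numbers are illustrative.
      gmsl_trend_mm_yr   = 3.3   # satellite altimetry
      steric_trend_mm_yr = 1.1   # in-situ hydrographic (OTE) estimate

      inferred_mass_trend = gmsl_trend_mm_yr - steric_trend_mm_yr
      grace_mass_trend    = 2.0  # example GRACE-derived ocean-mass trend
      residual = inferred_mass_trend - grace_mass_trend
      print(f"inferred mass: {inferred_mass_trend:.1f} mm/yr, residual vs GRACE: {residual:.1f} mm/yr")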

  3. Integrated Methodology for Estimating Water Use in Mediterranean Agricultural Areas

    Directory of Open Access Journals (Sweden)

    George C. Zalidis

    2009-08-01

    Full Text Available Agricultural use is by far the largest consumer of fresh water worldwide, especially in the Mediterranean, where it has reached unsustainable levels, thus posing a serious threat to water resources. Having a good estimate of the water used in an agricultural area would help water managers create incentives for water savings at the farmer and basin level, and meet the demands of the European Water Framework Directive. This work presents an integrated methodology for estimating water use in Mediterranean agricultural areas. It is based on well established methods of estimating the actual evapotranspiration through surface energy fluxes, customized for better performance under the Mediterranean conditions: small parcel sizes, detailed crop pattern, and lack of necessary data. The methodology has been tested and validated on the agricultural plain of the river Strimonas (Greece using a time series of Terra MODIS and Landsat 5 TM satellite images, and used to produce a seasonal water use map at a high spatial resolution. Finally, a tool has been designed to implement the methodology with a user-friendly interface, in order to facilitate its operational use.

  4. A systematic methodology to estimate added sugar content of foods.

    Science.gov (United States)

    Louie, J C Y; Moshtaghian, H; Boylan, S; Flood, V M; Rangan, A M; Barclay, A W; Brand-Miller, J C; Gill, T P

    2015-02-01

    The effect of added sugar on health is a topical area of research. However, there is currently no analytical or other method to easily distinguish between added sugars and naturally occurring sugars in foods. This study aimed to develop a systematic methodology to estimate added sugar values on the basis of analytical data and ingredients of foods. A 10-step, stepwise protocol was developed, starting with objective measures (six steps) and followed by more subjective estimation (four steps) if insufficient objective data are available. The method developed was applied to an Australian food composition database (AUSNUT2007) as an example. Out of the 3874 foods available in AUSNUT2007, 2977 foods (77%) were assigned an estimated value on the basis of objective measures (steps 1-6), and 897 (23%) were assigned a subjectively estimated value (steps 7-10). Repeatability analysis showed good repeatability for estimated values in this method. We propose that this method can be considered as a standardised approach for the estimation of added sugar content of foods to improve cross-study comparison.

  5. [Methodologies for estimating the indirect costs of traffic accidents].

    Science.gov (United States)

    Carozzi, Soledad; Elorza, María Eugenia; Moscoso, Nebel Silvana; Ripari, Nadia Vanina

    2017-01-01

    Traffic accidents generate multiple costs to society, including those associated with the loss of productivity. However, there is no consensus about the most appropriate methodology for estimating those costs. The aim of this study was to review methods for estimating indirect costs applied in crash cost studies. A thematic review of the literature was carried out between 1995 and 2012 in PubMed with the terms cost of illness, indirect cost, road traffic injuries, productivity loss. For the assessment of costs we used the human capital method, based on the wage income lost during the time of treatment and recovery of patients and caregivers. In the case of premature death or total disability, a discount rate was applied to obtain the present value of lost future earnings. The years counted were obtained by subtracting from life expectancy at birth the average age of those affected, excluding years not spent in economically active life. The interest in minimizing the problem is reflected in the evolution of the methodologies implemented. We expect this review to be useful for efficiently estimating the real indirect costs of traffic accidents.
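
    A minimal human-capital sketch of the present value of lost future earnings, as described above; the wage, ages, and discount rate are illustrative assumptions.

      def lost_future_earnings(annual_wage, age_at_event, life_expectancy,
                               working_age=18, discount_rate=0.03):
          # Human-capital estimate of productivity lost to premature death or
          # total disability: present value of wages over the remaining working
          # years (all parameter values are illustrative).
          start = max(age_at_event, working_age)
          years = max(int(life_expectancy - start), 0)
          return sum(annual_wage / (1 + discount_rate) ** t for t in range(1, years + 1))

      print(round(lost_future_earnings(annual_wage=12_000, age_at_event=30,
                                       life_expectancy=76)))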

  6. Development of Cost Estimation Methodology of Decommissioning for PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il; Yoo, Yeon Jae; Lim, Yong Kyu; Chang, Hyeon Sik; Song, Geun Ho

    2013-01-01

    The permanent closure of a nuclear power plant must be conducted under strict laws and with thorough planning, including cost and schedule estimation, because the plant is heavily contaminated with radioactivity. In Korea there are two types of nuclear power plant: the pressurized light water reactor (PWR) and the pressurized heavy water reactor (PHWR), known as the CANDU reactor. About 50% of the operating nuclear power plants in Korea are PWRs that were originally designed by CE (Combustion Engineering). There is experience with the decommissioning of Westinghouse-type PWRs, but little with CE-type PWRs. Therefore, the purpose of this paper is to develop a cost estimation methodology and evaluate the technical level of decommissioning for application to CE-type PWRs based on systems engineering. The aim of the present study is to develop a cost estimation methodology of decommissioning for application to PWRs. Through the study, the following conclusions are obtained: · Based on systems engineering, the decommissioning work can be classified into Set, Subset, Task, Subtask and Work cost units. · The Set and Task structures are grouped into 29 Sets and 15 Tasks, respectively. · The final result shows the cost and project schedule for project control and risk management. · The present results are preliminary and should be refined and improved based on modeling and cost data reflecting available technology and current costs, such as labor and waste data

  7. Methodology used in IRSN nuclear accident cost estimates in France

    International Nuclear Information System (INIS)

    2015-01-01

    This report describes the methodology used by IRSN to estimate the cost of potential nuclear accidents in France. It concerns possible accidents involving pressurized water reactors leading to radioactive releases into the environment. These accidents have been grouped into two accident families: severe accidents and major accidents. Two model scenarios have been selected to represent each of these families. The report discusses the general methodology of nuclear accident cost estimation. The crucial point is that all costs should be considered: if not, the cost is underestimated, which can lead to negative consequences for the value attributed to safety and for crisis preparation. As a result, the overall cost comprises many components: the best known is offsite radiological costs, but there are many others. The proposed estimates have thus required using a diversity of methods, which are described in this report. Figures are presented at the end of this report. Among other things, they show that purely radiological costs represent only a non-dominant part of the foreseeable economic consequences. (authors)

  8. Theoretical estimation of adiabatic temperature rise from the heat flow data obtained from a reaction calorimeter

    International Nuclear Information System (INIS)

    Das, Parichay K.

    2012-01-01

    Highlights: ► This method for estimating ΔT_ad(t) against time in a semi-batch reactor is distinctly pioneering and novel. ► It uniquely establishes a direct correspondence between the evolution of ΔT_ad(t) in the RC and of C_A(t) in a semi-batch reactor. ► Through a unique reaction scheme, the independent effects of heat of mixing and of reaction on ΔT_ad(t) have been demonstrated quantitatively. ► This work will help to build a thermally safe corridor for a thermally hazardous reaction. ► The author believes this manuscript will open a new vista for further research in adiabatic calorimetry. - Abstract: A novel method for estimating the transient profile of the adiabatic temperature rise has been developed from heat flow data for exothermic chemical reactions conducted in a reaction calorimeter (RC). It has also been demonstrated mathematically that there exists a direct qualitative equivalence between the temporal evolution of the adiabatic temperature rise and the concentration of the limiting reactant for an exothermic chemical reaction carried out in semi-batch mode. The proposed procedure shows that the adiabatic temperature rise will always be less than that of the same reaction executed in batch mode, thereby affording a thermally safe corridor. Moreover, a unique reaction scheme has been designed to establish quantitatively the independent heat effects of dissolution and reaction. It is hoped that the profile of the transient adiabatic temperature rise that can be prepared by the proposed method may provide ample scope for further research.
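
    A sketch of the underlying energy balance: integrating a calorimeter heat-flow trace and dividing by the thermal mass gives a transient adiabatic temperature rise. This is only the generic balance with assumed values, not the author's semi-batch formulation.

      import numpy as np

      def adiabatic_temperature_rise(t_s, q_w, mass_kg, cp_j_per_kg_k):
          # Transient adiabatic temperature rise from calorimeter heat-flow data:
          # cumulative released heat (trapezoidal integration) divided by the
          # thermal mass.  Generic energy balance, illustrative values only.
          q_cum = np.concatenate(
              ([0.0], np.cumsum(np.diff(t_s) * 0.5 * (q_w[1:] + q_w[:-1]))))
          return q_cum / (mass_kg * cp_j_per_kg_k)

      t = np.linspace(0, 3600, 361)            # s
      q = 50.0 * np.exp(-t / 1200.0)           # example heat-flow trace (W)
      dT = adiabatic_temperature_rise(t, q, mass_kg=2.0, cp_j_per_kg_k=1800.0)
      print(f"final adiabatic rise ~ {dT[-1]:.1f} K")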

  9. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  10. A robust methodology for modal parameters estimation applied to SHM

    Science.gov (United States)

    Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

    2017-10-01

    The subject of structural health monitoring is drawing more and more attention over the last years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced through successive researches. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed. The second step consists in the data mining of information gathered from the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results appear to be more robust and accurate compared with those obtained from methods based on a one-step clustering analysis.
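
    A simplified sketch of the first clustering step, reduced to natural frequencies only: estimates from models of increasing order are grouped, and well-populated clusters are kept as physical modes. The paper's method also uses damping and mode-shape information plus a second data-mining step; the data and thresholds here are synthetic.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      # Synthetic stabilization-diagram data: frequency estimates repeated across
      # model orders for three physical modes, plus scattered spurious estimates.
      rng = np.random.default_rng(0)
      physical = np.concatenate([rng.normal(f, 0.02, 30) for f in (1.5, 3.2, 7.8)])
      spurious = rng.uniform(0.5, 10.0, 15)
      freqs = np.concatenate([physical, spurious])

      labels = fcluster(linkage(freqs.reshape(-1, 1), method="average"),
                        t=0.2, criterion="distance")
      for lab in np.unique(labels):
          members = freqs[labels == lab]
          if len(members) >= 10:       # keep only well-populated clusters
              print(f"physical mode ~ {members.mean():.2f} Hz ({len(members)} estimates)")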

  11. Forensic anthropology casework-essential methodological considerations in stature estimation.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Menezes, Ritesh G; Ghosh, Abhik

    2012-03-01

    The examination of skeletal remains is a challenge to the medical examiner's/coroner's office and the forensic anthropologist conducting the investigation. One of the objectives of the medico-legal investigation is to estimate stature or height from various skeletal remains and body parts brought for examination. Various skeletal remains and body parts bear a positive and linear correlation with stature and have been successfully used for stature estimation. This concept is utilized in estimation of stature in forensic anthropology casework in mass disasters and other forensic examinations. Scientists have long been involved in standardizing the anthropological data with respect to various populations of the world. This review deals with some essential methodological issues that need to be addressed in research related to estimation of stature in forensic examinations. These issues have direct relevance in the identification of commingled or unknown remains and therefore it is essential that forensic nurses are familiar with the theories and techniques used in forensic anthropology. © 2012 International Association of Forensic Nurses.

  12. A new method to estimate global mass transport and its implication for sea level rise

    Science.gov (United States)

    Yi, S.; Heki, K.

    2017-12-01

    Estimates of changes in global land mass using GRACE observations can be obtained by two methods, a mascon method and a forward modeling method. However, results from these two methods show inconsistent secular trends. The sea level budget can be used to validate the consistency among observations of sea level rise by altimetry, steric change by the Argo project, and mass change by GRACE. Mascon products from JPL, GSFC and CSR are compared here; we find that none of these three products achieves a reconciled sea level budget, while this problem can be solved by a new forward modeling method. We further investigate the origin of this difference, and speculate that it is caused by signal leakage from the ocean mass. Generally, it is well recognized that land signals leak into the oceans, but it also happens the other way around. We stress the importance of correcting for leakage from the ocean in the estimation of global land masses. Based on a reconciled sea level budget, we confirm that global sea level rise has been accelerating significantly over 2005-2015, as a result of the ongoing global temperature increase.

  13. Rising atmospheric CO2 and crops: Research methodology and direct effects

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, H. [National Soil Dynamics Laboratory, Auburn, AL (United States); Acock, B. [Systems Research Laboratory, Beltsville, MD (United States)

    1993-12-31

    Carbon dioxide is the food of trees and grass. Our relentless pursuit of a better life has taken us down a traffic-jammed road, past smoking factories and forests. This pursuit is forcing a rise in the atmospheric CO2 level, and no one knows when, or if, flood stage will be reached. Some thinkers have suggested that this increase of CO2 in the atmosphere will cause warming. Whether or not this prediction is realized, more CO2 will directly affect plants. Data from controlled observations have usually, but not always, shown benefits. Our choices of scientific equipment for gathering CO2 response data are critical since we must see what is happening through the eye of the instrument. The signals derived from our sensors will ultimately determine the truth of our conclusions, conclusions that will profoundly influence our policy decisions. Experimental gear is selected on the basis of the scale of interest and the problem to be addressed. Our imaginations and our budgets interact to set bounds on our objectives and approaches. Techniques run the gamut from cellular microprobes through whole-plant controlled environment chambers to field-scale exposure systems. Trade-offs exist among the various CO2 exposure techniques, and many factors impinge on the choice of a method. All exposure chambers are derivatives of three primary types--batch, plug flow, and continuous stirred tank reactor. Systems for the generation of controlled test atmospheres of CO2 vary in two basic ways--size and degree of control. Among the newest is free-air CO2 enrichment, which allows tens of square meters of cropland to be studied.

  14. Methodology for estimating reprocessing costs for nuclear fuels

    International Nuclear Information System (INIS)

    Carter, W.L.; Rainey, R.H.

    1980-02-01

    A technological and economic evaluation of reprocessing requirements for alternate fuel cycles requires a common assessment method and a common basis to which various cycles can be related. A methodology is described for the assessment of alternate fuel cycles utilizing a side-by-side comparison of functional flow diagrams of major areas of the reprocessing plant with corresponding diagrams of the well-developed Purex process as installed in the Barnwell Nuclear Fuel Plant (BNFP). The BNFP treats 1500 metric tons of uranium per year (MTU/yr). Complexity and capacity factors are determined for adjusting the estimated facility and equipment costs of BNFP to determine the corresponding costs for the alternate fuel cycle. Costs of capacities other than the reference 1500 MT of heavy metal per year are estimated by the use of scaling factors. Unit costs of reprocessed fuel are calculated using a discounted cash flow analysis for three economic bases to show the effect of low-risk, typical, and high-risk financing methods
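
    A sketch of the capacity-scaling step mentioned above; the 0.6 exponent is the generic 'six-tenths rule' placeholder, not the factor derived in the report, and the costs are illustrative.

      def scaled_capital_cost(reference_cost, reference_capacity, new_capacity, exponent=0.6):
          # Capacity scaling of an estimated facility cost relative to the
          # 1500 MTU/yr reference plant.  The exponent is a generic placeholder.
          return reference_cost * (new_capacity / reference_capacity) ** exponent

      # Example: scale a $1.5B reference reprocessing plant from 1500 to 750 MTU/yr.
      print(f"${scaled_capital_cost(1.5e9, 1500, 750):,.0f}")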

  15. A Hierarchical Clustering Methodology for the Estimation of Toxicity

    Science.gov (United States)

    A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...

  16. Methodology for completing Hanford 200 Area tank waste physical/chemical profile estimations

    International Nuclear Information System (INIS)

    Kruger, A.A.

    1996-01-01

    The purpose of the Methodology for Completing Hanford 200 Area Tank Waste Physical/Chemical Profile Estimations is to capture the logic inherent to completing 200 Area waste tank physical and chemical profile estimates. Since there has been good correlation between the estimate profiles and actual conditions during sampling and sub-segment analysis, it is worthwhile to document the current estimate methodology

  17. A photogrammetric methodology for estimating construction and demolition waste composition

    International Nuclear Information System (INIS)

    Heck, H.H.; Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.

    2002-01-01

    Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected is the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile, after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide the picture into eighty cells composed of ten columns and eight rows. The component distribution of each cell was estimated and results were summed to get a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated. One set of distribution factors was developed to correct the volume distributions and the second set was developed to correct the weight distributions. The bulk density of each of the waste components were determined and used to convert waste volumes to weights. (author)
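
    An illustrative roll-up of the final conversion step: photo-derived component volume fractions are combined with bulk densities and scaled to the weighed container load. The fractions, densities, and the simple uniform scaling stand in for the paper's calibrated volume and weight distribution factors.

      # Hypothetical roll-off-container calculation: convert photo-derived
      # component volume fractions (from the 10 x 8 grid of cells) into
      # component weights using bulk densities, then scale so the total matches
      # the measured container weight.  All values are placeholders.
      volume_fractions = {"wood": 0.35, "drywall": 0.20, "concrete": 0.15,
                          "metals": 0.05, "roofing": 0.10, "other": 0.15}
      bulk_density_kg_m3 = {"wood": 240, "drywall": 500, "concrete": 1600,
                            "metals": 900, "roofing": 700, "other": 350}
      container_volume_m3, container_weight_kg = 23.0, 8200.0

      raw = {k: volume_fractions[k] * container_volume_m3 * bulk_density_kg_m3[k]
             for k in volume_fractions}
      scale = container_weight_kg / sum(raw.values())
      weights = {k: round(v * scale) for k, v in raw.items()}
      print(weights)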

  18. A photogrammetric methodology for estimating construction and demolition waste composition

    Energy Technology Data Exchange (ETDEWEB)

    Heck, H.H. [Florida Inst. of Technology, Dept. of Civil Engineering, Melbourne, Florida (United States); Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.

    2002-06-15

    Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected is the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile, after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide the picture into eighty cells composed of ten columns and eight rows. The component distribution of each cell was estimated and results were summed to get a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated. One set of distribution factors was developed to correct the volume distributions and the second set was developed to correct the weight distributions. The bulk density of each of the waste components were determined and used to convert waste volumes to weights. (author)

  19. Towards a unified estimate of arctic glaciers contribution to sea level rise since 1972.

    Science.gov (United States)

    Dehecq, A.; Gardner, A. S.; Alexandrov, O.; McMichael, S.

    2017-12-01

    Glacier retreat contributed about one third of the observed sea level rise since 1971 (IPCC). However, long-term estimates of glacier volume changes rely on sparse field observations, and region-wide satellite observations are available mostly after 2000. The recently declassified images from the reconnaissance satellite series Hexagon (KH9), which acquired 6 m resolution stereoscopic images from 1971 to 1986, open new possibilities for glacier observation. But the film-printed images represent a processing challenge. Here we present an automatic workflow developed to generate Digital Elevation Models (DEMs) at 24 m resolution from the raw scanned KH9 images. It includes a preprocessing step to detect fiducial marks and to correct distortions of the film caused by the 40-year storage. An estimate of the unknown satellite position is obtained from a crude geolocation of the images. Each stereo image pair/triplet is then processed using the NASA Ames Stereo Pipeline to derive an unscaled DEM using standard photogrammetric techniques. This DEM is finally aligned to a reference topography to account for errors in translation, rotation and scaling. In a second part, we present DEMs generated over glaciers in the Canadian Arctic and analyze glacier volume changes from 1970 to the more recent WorldView ArcticDEM.

  20. Are sea-level-rise trends along the coasts of the north Indian Ocean consistent with global estimates?

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A.S.; Shankar, D.

    yielded sea-level-rise estimates between 1.06 and 1.75 mm yr-1, with a regional average of 1.29 mm yr-1, when corrected for glacial isostatic adjustment (GIA) using model data. These estimates are consistent...

  1. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    Science.gov (United States)

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  2. Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001

    Science.gov (United States)

    L. S. Heath; R. A. Birdsey; D. W. Williams

    2002-01-01

    The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...

  3. Methodological proposals for estimating the price of climate in France

    Science.gov (United States)

    Joly, D.; Brossard, T.; Cardot, H.; Cavailhes, J.; Hilal, M.; Wavresky, P.

    2009-09-01

    identification problem, well-known in hedonic literature, is not a problem here, because climate is a non-produced good. Some explanatory variables may be endogenous; thus, we use the instrumental method. Finally, multicollinearity, detected by the condition number, occurs between climatic variables; thus we use a second estimation procedure, Partial Least Squares. The mean annual temperature has a positive significant effect on the housing price for owner occupiers: a rise of 1 °C entails an increase in housing prices of 5.9-6.2% (according to the equation and estimation method). The sign is also positive for tenants, with values between 2.5 and 3.9%, which are roughly half as much as for owner-occupiers. The effect of warmer summers (mean July temperature minus mean annual temperature) is compounded with the preceding one for single-detached houses: an extra 1 °C entails a price increase of 3.7 to 8.4% (depending on the model). This effect is insignificant for apartments. Hot summer days (more than 30 °C) have a significant effect for owner-occupiers of single-detached houses and renters of apartments. At the median point, an extra day of heat lowers the value of housing by 4.3% (owner-occupiers) or by 1% (tenants). This effect is quadratic, probably due to seaside sites where hot summers are appreciated. French households are insensitive to cold winters, either the January temperature minus the mean annual temperature or the number of coldest days (less than - 5 °C). The number of days' rain in January and July has a significant effect on real-estate values. The January sign is the expected: prices or rents fall by almost 1.2-2.3% for an extra day's rain. The number of days of rainfall in July also exerts a positive effect on the price of apartments (but not on the price of single-detached houses), indicating that households pay more for their housing (1.4 to 4.4%) for an extra summer day's rain. Rosen S., 1974. Hedonic prices and implicit markets: product differentiation

  4. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    Science.gov (United States)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory tests were carried out, resulting in determination of the filtration coefficient and an estimate of its combined uncertainty.

  5. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...
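
    The contrast drawn here between naively summing memory soft error rates (SERs) and accounting for whether a struck bit actually matters can be illustrated with a per-memory derating factor, as sketched below; the factors and FIT values are hypothetical and this is not the paper's simulation-based estimator.

      # Minimal sketch of why summing raw soft error rates (SERs) over-estimates system
      # failures: only errors in bits that are "live" when struck corrupt execution.
      # The per-memory derating factors below are hypothetical, not from the paper.
      memories = {
          # name: (raw SER in FIT, fraction of errors that actually corrupt execution)
          "L1 data cache":  (120.0, 0.15),
          "L2 cache":       (800.0, 0.05),
          "register file":  (40.0,  0.35),
          "main memory":    (2000.0, 0.02),
      }

      naive_fit = sum(ser for ser, _ in memories.values())
      derated_fit = sum(ser * frac for ser, frac in memories.values())

      print(f"naive accumulated SER : {naive_fit:.0f} FIT")
      print(f"derated system SER    : {derated_fit:.0f} FIT")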

  6. Assessment of compliance with regulatory requirements for a best estimate methodology for evaluation of ECCS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Un Chul; Jang, Jin Wook; Lim, Ho Gon; Jeong, Ik [Seoul National Univ., Seoul (Korea, Republic of); Sim, Suk Ku [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2000-03-15

    The best-estimate methodology for evaluation of ECCS proposed by KEPCO (KREM) uses a thermal-hydraulic best-estimate code, and the topical report for the methodology states that it meets the regulatory requirements of the USNRC regulatory guide. In this research, the assessment of compliance with regulatory requirements for the methodology is performed. The state of licensing procedures in other countries and the best-estimate evaluation methodologies of Europe are also investigated. The applicability of the models and the propriety of the uncertainty analysis procedure of KREM are appraised, and compliance with the USNRC regulatory guide is assessed.

  7. A Survey of Cost Estimating Methodologies for Distributed Spacecraft Missions

    Science.gov (United States)

    Foreman, Veronica L.; Le Moigne, Jacqueline; de Weck, Oliver

    2016-01-01

    Satellite constellations present unique capabilities and opportunities to Earth orbiting and near-Earth scientific and communications missions, but also present new challenges to cost estimators. An effective and adaptive cost model is essential to successful mission design and implementation, and as Distributed Spacecraft Missions (DSM) become more common, cost estimating tools must become more representative of these types of designs. Existing cost models often focus on a single spacecraft and require extensive design knowledge to produce high fidelity estimates. Previous research has examined the limitations of existing cost practices as they pertain to the early stages of mission formulation, for both individual satellites and small satellite constellations. Recommendations have been made for how to improve the cost models for individual satellites one-at-a-time, but much of the complexity in constellation and DSM cost modeling arises from constellation systems level considerations that have not yet been examined. This paper constitutes a survey of the current state-of-the-art in cost estimating techniques with recommendations for improvements to increase the fidelity of future constellation cost estimates. To enable our investigation, we have developed a cost estimating tool for constellation missions. The development of this tool has revealed three high-priority shortcomings within existing parametric cost estimating capabilities as they pertain to DSM architectures: design iteration, integration and test, and mission operations. Within this paper we offer illustrative examples of these discrepancies and make preliminary recommendations for addressing them. DSM and satellite constellation missions are shifting the paradigm of space-based remote sensing, showing promise in the realms of Earth science, planetary observation, and various heliophysical applications. To fully reap the benefits of DSM technology, accurate and relevant cost estimating capabilities

  8. Methodology for estimating accidental radioactive releases in nuclear waste management

    International Nuclear Information System (INIS)

    Levy, H.B.

    1979-01-01

    Estimation of the risks of accidental radioactive releases is necessary in assessing the safety of any nuclear waste management system. The case of a radioactive waste form enclosed in a barrier system is considered. Two test calculations were carried out

  9. Application of a rising plate meter to estimate forage yield on dairy farms in Pennsylvania

    Science.gov (United States)

    Accurately assessing pasture forage yield is necessary for producers who want to budget feed expenses and make informed pasture management decisions. Clipping and weighing forage from a known area is a direct method to measure pasture forage yield; however, it is time-consuming. The rising plate mete...

  10. Estimating Areas of Vulnerability: Sea Level Rise and Storm Surge Hazards in the National Parks

    Science.gov (United States)

    Caffrey, M.; Beavers, R. L.; Slayton, I. A.

    2013-12-01

    The University of Colorado Boulder in collaboration with the National Park Service has undertaken the task of compiling sea level change and storm surge data for 105 coastal parks. The aim of our research is to highlight areas of the park system that are at increased risk of rapid inundation as well as periodic flooding due to sea level rise and storms. This research will assist park managers and planners in adapting to climate change. The National Park Service incorporates climate change data into many of their planning documents and is willing to implement innovative coastal adaptation strategies. Events such as Hurricane Sandy highlight how impacts of coastal hazards will continue to challenge management of natural and cultural resources and infrastructure along our coastlines. This poster will discuss the current status of this project. We discuss the impacts of Hurricane Sandy as well as the latest sea level rise and storm surge modeling being employed in this project. In addition to evaluating various drivers of relative sea-level change, we discuss how park planners and managers also need to consider projected storm surge values added to sea-level rise magnitudes, which could further complicate the management of coastal lands. Storm surges occurring at coastal parks will continue to change the land and seascapes of these areas, with the potential to completely submerge them. The likelihood of increased storm intensity added to increasing rates of sea-level rise makes predicting the reach of future storm surges essential for planning and adaptation purposes. The National Park Service plays a leading role in developing innovative strategies for coastal parks to adapt to sea-level rise and storm surge, whilst coastal storms are opportunities to apply highly focused responses.
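
    The planning point made above, that projected storm surge should be added to sea-level-rise magnitudes before comparing against site elevations, amounts to a simple screening calculation; the sketch below uses hypothetical elevations and scenario values, not project data.

      # Minimal screening sketch: a site floods when local sea-level rise plus storm
      # surge exceeds its elevation above today's mean higher high water. All numbers
      # are hypothetical placeholders, not project data.
      def inundated(elevation_m, slr_m, surge_m):
          """Return True if the projected total water level reaches the site elevation."""
          return slr_m + surge_m >= elevation_m

      sites = {"visitor center": 1.2, "historic fort": 2.8, "campground": 0.6}
      slr_2100 = 1.0      # assumed sea-level-rise scenario (m)
      surge_100yr = 1.5   # assumed 100-year storm surge (m)

      for name, elev in sites.items():
          flag = "at risk" if inundated(elev, slr_2100, surge_100yr) else "above projected level"
          print(f"{name}: {flag}")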

  11. Polyfactorial corruption index in the Russian regions: methodology of estimation

    Directory of Open Access Journals (Sweden)

    Elina L. Sidorenko

    2016-09-01

    Full Text Available Objective: to summarize criminological, social and economic indicators of development of the Russian Federation subjects; to identify and assess the hidden system dependencies between social indicators and levels of corruption; to define the links between individual indicators; and to develop a methodology for anti-corruption ranking of the regions. Methods: comparison, analysis, synthesis, mathematical modeling, correlation comparisons and extrapolation. Results: the author describes the methodology of the complex analysis of corruption in the Russian Federation subjects and elaborates short-term and medium-term forecasts for its development. Scientific novelty: for the first time in domestic criminology, an algorithm is proposed for studying and forecasting regional corruption on the basis of polyfactorial analysis of criminological, social and political indicators. For a profound and comprehensive study of the regional aspects of corruption, a model was developed to monitor and forecast corruption on the basis of measuring the polyfactorial corruption index (PCI). The PCI consists of two groups of parameters: the corruption potential of the region of the country (CPR) and the corruption risk in the region (CRR). Practical significance: the research results can be used in developing regional strategies of corruption counteraction, as well as in adjusting the existing methods of corruption prevention.

  12. Methodology development for the radioecological monitoring effectiveness estimation

    International Nuclear Information System (INIS)

    Gusev, A.E.; Kozlov, A.A.; Lavrov, K.N.; Sobolev, I.A.; Tsyplyakova, T.P.

    1997-01-01

    A general model for evaluating programs that assure radiological and ecological protection of the public is described. A set of purposes and criteria is selected that characterizes, and makes it possible to estimate, the effectiveness of the composition of an environmental protection program. An algorithm is considered for selecting the optimal management decision from the viewpoint of the cost of work connected with improving population protection. The place of radioecological monitoring within the general problem of environmental pollution is determined. It is shown that the effectiveness of how the monitoring is organized is closely connected with the radiological and ecological protection of the population

  13. Qualitative and quantitative cost estimation : a methodology analysis

    NARCIS (Netherlands)

    Aram, S.; Eastman, C.; Beetz, J.; Issa, R.; Flood, I.

    2014-01-01

    This paper reports on the first part of ongoing research with the goal of designing a framework and a knowledge-based system for 3D parametric model-based quantity take-off and cost estimation in the Architecture, Engineering and Construction (AEC) industry. The authors have studied and analyzed

  14. Probabilistic methodology for estimating radiation-induced cancer risk

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Leggett, R.W.; Williams, L.R.

    1981-01-01

    The RICRAC computer code was developed at Oak Ridge National Laboratory to provide a versatile and convenient methodology for radiation risk assessment. The code allows as input essentially any dose pattern commonly encountered in risk assessments for either acute or chronic exposures, and it includes consideration of the age structure of the exposed population. Results produced by the analysis include the probability of one or more radiation-induced cancer deaths in a specified population, expected numbers of deaths, and expected years of life lost as a result of premature fatalities. These calculations include consideration of competing risks of death from all other causes. The program also generates a probability frequency distribution of the expected number of cancers in any specified cohort resulting from a given radiation dose. The methods may be applied to any specified population and dose scenario
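
    The kind of output described, the expected number of radiation-induced cancer deaths and the probability of one or more such deaths in an exposed population, can be illustrated with a simple linear-risk calculation as sketched below; the risk coefficient, dose and population size are assumed values, and this is not the RICRAC code.

      # Minimal sketch of the kind of quantity RICRAC reports: with a (hypothetical)
      # linear risk coefficient, the expected number of radiation-induced cancer deaths
      # and the probability of one or more such deaths in an exposed cohort.
      import math

      population = 10_000          # exposed cohort size (assumed)
      dose_sv = 0.01               # average individual dose in Sv (assumed)
      risk_per_sv = 5e-2           # assumed lifetime fatal-cancer risk per Sv

      p_individual = risk_per_sv * dose_sv
      expected_deaths = population * p_individual
      # Treating individuals as independent Bernoulli trials:
      p_one_or_more = 1.0 - (1.0 - p_individual) ** population
      # A Poisson approximation gives nearly the same answer for small p:
      p_poisson = 1.0 - math.exp(-expected_deaths)

      print(f"expected radiation-induced deaths: {expected_deaths:.2f}")
      print(f"P(>=1 death), binomial: {p_one_or_more:.3f}; Poisson approx: {p_poisson:.3f}")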

  15. Development of a methodology for the assessment of sea level rise impacts on Florida's transportation modes and infrastructure : [summary].

    Science.gov (United States)

    2012-01-01

    In Florida, low elevations can make transportation infrastructure in coastal and low-lying areas potentially vulnerable to sea level rise (SLR). Because global SLR forecasts lack precision at local or regional scales, SLR forecasts or scenarios for p...

  16. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

  17. Nuclear data evaluation methodology including estimates of covariances

    Directory of Open Access Journals (Sweden)

    Smith D.L.

    2010-10-01

    Full Text Available Evaluated nuclear data rather than raw experimental and theoretical information are employed in nuclear applications such as the design of nuclear energy systems. Therefore, the process by which such information is produced and ultimately used is of critical interest to the nuclear science community. This paper provides an overview of various contemporary methods employed to generate evaluated cross sections and related physical quantities such as particle emission angular distributions and energy spectra. The emphasis here is on data associated with neutron induced reaction processes, with consideration of the uncertainties in these data, and on the more recent evaluation methods, e.g., those that are based on stochastic (Monte Carlo) techniques. There is no unique way to perform such evaluations, nor are nuclear data evaluators united in their opinions as to which methods are superior to the others in various circumstances. In some cases it is not critical which approaches are used as long as there is consistency and proper use is made of the available physical information. However, in other instances there are definite advantages to using particular methods as opposed to other options. Some of these distinctions are discussed in this paper and suggestions are offered regarding fruitful areas for future research in the development of evaluation methodology.

  18. METHODOLOGY RELATED TO ESTIMATION OF INVESTMENT APPEAL OF RURAL SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    A. S. Voshev

    2010-03-01

    Full Text Available Conditions for production activity vary considerably from region to region, from area to area, from settlement to settlement. In this connection, investors are challenged to choose an optimum site for a new enterprise. To make the decision, investors follow such references as investment potential and risk level; their interrelation determines the investment appeal of a country, region, area, city or rural settlement. At present Russia faces a problem of «black boxes» represented by a lot of rural settlements. Until now, no effective and suitable techniques have existed for quantitative estimation of the investment potential and risks of rural settlements, nor systems to make this information accessible to potential investors.

  19. Methodology proposal for estimation of carbon storage in urban green areas

    NARCIS (Netherlands)

    Schröder, C.; Mancosu, E.; Roerink, G.J.

    2013-01-01

    Methodology proposal for estimation of carbon storage in urban green areas; final report. Subtitle: Final report of Task 262-5-6 "Carbon sequestration in urban green infrastructure". Project manager: Marie Cugny-Seguin. Date: 15-10-2013

  20. Systematic methodology for estimating direct capital costs for blanket tritium processing systems

    International Nuclear Information System (INIS)

    Finn, P.A.

    1985-01-01

    This paper describes the methodology developed for estimating the relative capital costs of blanket processing systems. The capital costs of the nine blanket concepts selected in the Blanket Comparison and Selection Study are presented and compared

  1. A Capacitance-Based Methodology for the Estimation of Piezoelectric Coefficients of Poled Piezoelectric Materials

    KAUST Repository

    Al Ahmad, Mahmoud; Alshareef, Husam N.

    2010-01-01

    A methodology is proposed to estimate the piezoelectric coefficients of bulk piezoelectric materials using simple capacitance measurements. The extracted values of d33 and d31 from the capacitance measurements were 506 pC/N and 247 p

  2. Methodological aspects of core meltdown accidents frequency estimates

    International Nuclear Information System (INIS)

    Matthis, P.

    1984-01-01

    A survey is given of the work of the ecological institute relating to the models and methods used in the German Risk Study for the assessment of core meltdown accident frequency. A statistical model used by the ecological institute for estimating the outage behaviour of components is taken as a comparison, which leads to the conclusion that no appropriate methods for the assessment of component reliability are available to date. Furthermore, there are no validated methods for error propagation computation. The lower limits for the ranges of component reliability are calculated by approximation. As a result of imperfect modelling and of a number of methodological inaccuracies and omissions, the German Risk Study underestimates the ranges of component reliability by a factor of 3 to 70 (depending on the type of component). (RF)

  3. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    International Nuclear Information System (INIS)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards

  4. A robust methodology for kinetic model parameter estimation for biocatalytic reactions

    DEFF Research Database (Denmark)

    Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson

    2012-01-01

    … parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial rate data, for example, the Lineweaver-Burk plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches...

  5. An Estimator of Heavy Tail Index through the Generalized Jackknife Methodology

    Directory of Open Access Journals (Sweden)

    Weiqi Liu

    2014-01-01

    Full Text Available In practice, sometimes the data can be divided into several blocks but only a few of the largest observations within each block are available to estimate the heavy tail index. To address this problem, we propose a new class of estimators through the Generalized Jackknife methodology based on Qi’s estimator (2010). These estimators are proved to be asymptotically normal under suitable conditions. Compared to Hill’s estimator and Qi’s estimator, our new estimator has better asymptotic efficiency in terms of the minimum mean squared error, for a wide range of the second order shape parameters. For finite samples, our new estimator still compares favorably to Hill’s estimator and Qi’s estimator, providing stable sample paths as a function of the number of blocks into which the sample is divided, smaller estimation bias, and smaller MSE.
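
    For reference, the classical Hill estimator that the proposed estimator is compared against is sketched below on a simulated Pareto sample; the paper's Generalized Jackknife estimator, built on Qi's (2010) block-based estimator, is a different construction and is not reproduced here.

      # For reference only: the classical Hill estimator of the tail index, the baseline
      # the proposed Generalized-Jackknife estimator is compared against. This is not
      # the paper's estimator.
      import numpy as np

      def hill_estimator(sample, k):
          """Hill's estimator using the k largest order statistics of a positive sample."""
          x = np.sort(np.asarray(sample, dtype=float))
          top = x[-k:]                      # k largest observations
          threshold = x[-k - 1]             # (k+1)-th largest observation
          return np.mean(np.log(top) - np.log(threshold))

      rng = np.random.default_rng(1)
      # Classical Pareto sample with shape alpha = 2, so the true tail index is 1/alpha = 0.5
      data = rng.pareto(2.0, size=20_000) + 1.0
      print(f"Hill estimate (k=500): {hill_estimator(data, 500):.3f}  (true value 0.5)")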

  6. ESTIMATION OF THE TEMPERATURE RISE OF A MCU ACID STREAM PIPE IN NEAR PROXIMITY TO A SLUDGE STREAM PIPE

    International Nuclear Information System (INIS)

    Fondeur, F; Michael Poirier, M; Samuel Fink, S

    2007-01-01

    Effluent streams from the Modular Caustic-Side Solvent Extraction Unit (MCU) will transfer to the tank farms and to the Defense Waste Processing Facility (DWPF). These streams will contain entrained solvent. A significant portion of the Strip Effluent (SE) pipeline (i.e., the acid stream containing Isopar® L residues) length is within one inch of a sludge stream. Personnel envisioned that the sludge stream temperature may reach 100 °C during operation. The nearby SE stream may receive heat from the sludge stream and reach temperatures that may lead to flammability issues once the contents of the SE stream discharge into a larger reservoir. To this end, personnel used correlations from the literature to estimate the maximum temperature rise the SE stream may experience if the nearby sludge stream reaches boiling temperature. Several calculation methods were used to determine the temperature rise of the SE stream. One method considered a steady-state heat balance equation that employed correlation functions to estimate the heat transfer rate. This method showed that the maximum temperature of the acid stream (SE) may exceed 45 °C when the nearby sludge stream is 80 °C or higher. A second method used an effectiveness calculation to predict the heat transfer rate in a single-pass heat exchanger. By envisioning the acid and sludge pipes as a parallel-flow pipe-to-pipe heat exchanger, this method provides a conservative estimate of the maximum temperature rise. Assuming the contact area (i.e., the area over which the heat transfer occurs) is the whole pipe area, the results found by this method nearly matched the results found with the previous calculation method. It is recommended that the sludge stream be maintained below 80 °C to prevent a flammable vapor hazard from occurring
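
    The effectiveness (ε-NTU) bound for a parallel-flow pipe-to-pipe heat exchanger mentioned above can be sketched as follows; the flow rates, UA value and inlet temperatures are hypothetical placeholders, not the MCU design values.

      # Illustrative sketch of an effectiveness (epsilon-NTU) bound for a parallel-flow
      # pipe-to-pipe heat exchanger. All inputs are hypothetical placeholders.
      import math

      def parallel_flow_effectiveness(ntu, cr):
          """Effectiveness of a single-pass parallel-flow heat exchanger."""
          return (1.0 - math.exp(-ntu * (1.0 + cr))) / (1.0 + cr)

      UA = 50.0            # overall heat transfer coefficient times contact area (W/K), assumed
      C_hot = 400.0        # sludge stream heat capacity rate (W/K), assumed
      C_cold = 150.0       # SE (acid) stream heat capacity rate (W/K), assumed
      T_hot_in, T_cold_in = 80.0, 25.0   # inlet temperatures (deg C), assumed

      C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
      ntu = UA / C_min
      eps = parallel_flow_effectiveness(ntu, C_min / C_max)

      q = eps * C_min * (T_hot_in - T_cold_in)      # heat transferred (W)
      T_cold_out = T_cold_in + q / C_cold
      print(f"effectiveness = {eps:.2f}, SE stream outlet temperature ~ {T_cold_out:.1f} deg C")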

  7. Estimation Methodology for the Electricity Consumption with the Daylight- and Occupancy-Controlled Artificial Lighting

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Jensen, Rasmus Lund; Strømberg, Ida Kristine

    2017-01-01

    Artificial lighting represents 15-30% of the total electricity consumption in buildings in Scandinavia. It is possible to avoid a large share of electricity use for lighting by application of daylight control systems for artificial lighting. Existing methodology for estimation of electricity...... consumption with application of such control systems in Norway is based on Norwegian standard NS 3031:2014 and can only provide a rough estimate. This paper aims to introduce a new estimation methodology for the electricity usage with the daylight- and occupancy-controlled artificial lighting...

  8. Methodology to estimate parameters of an excitation system based on experimental conditions

    Energy Technology Data Exchange (ETDEWEB)

    Saavedra-Montes, A.J. [Carrera 80 No 65-223, Bloque M8 oficina 113, Escuela de Mecatronica, Universidad Nacional de Colombia, Medellin (Colombia); Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Ramirez-Scarpetta, J.M. [Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Malik, O.P. [2500 University Drive N.W., Electrical and Computer Engineering Department, University of Calgary, Calgary, Alberta (Canada)

    2011-01-15

    A methodology to estimate the parameters of a potential-source controlled rectifier excitation system model is presented in this paper. The proposed parameter estimation methodology is based on the characteristics of the excitation system. A comparison of two pseudo random binary signals, two sampling periods for each one, and three estimation algorithms is also presented. Simulation results from an excitation control system model and experimental results from an excitation system of a power laboratory setup are obtained. To apply the proposed methodology, the excitation system parameters are identified at two different levels of the generator saturation curve. The results show that it is possible to estimate the parameters of the standard model of an excitation system by recording two signals while the system operates in closed loop with the generator. The normalized sum of squared error obtained with experimental data is below 10%, and with simulation data is below 5%. (author)

  9. Vertical Rise Velocity of Equatorial Plasma Bubbles Estimated from Equatorial Atmosphere Radar Observations and High-Resolution Bubble Model Simulations

    Science.gov (United States)

    Yokoyama, T.; Ajith, K. K.; Yamamoto, M.; Niranjan, K.

    2017-12-01

    Equatorial plasma bubble (EPB) is a well-known phenomenon in the equatorial ionospheric F region. As it causes severe scintillation in the amplitude and phase of radio signals, it is important to understand and forecast the occurrence of EPBs from a space weather point of view. The development of EPBs is presently believed to be an evolution of the generalized Rayleigh-Taylor instability. We have already developed a 3D high-resolution bubble (HIRB) model with a grid spacing as small as 1 km and presented nonlinear growth of EPBs which shows very turbulent internal structures such as bifurcation and pinching. As EPBs have field-aligned structures, the latitude range that is affected by EPBs depends on the apex altitude of EPBs over the dip equator. However, it was not easy to observe the apex altitude and vertical rise velocity of EPBs. Equatorial Atmosphere Radar (EAR) in Indonesia is capable of steering radar beams quickly so that the growth phase of EPBs can be captured clearly. The vertical rise velocities of the EPBs observed around the midnight hours are significantly smaller compared to those observed in postsunset hours. Further, the vertical growth of the EPBs around midnight hours ceases at relatively lower altitudes, whereas the majority of EPBs at postsunset hours were found to have grown beyond the maximum detectable altitude of the EAR. The HIRB model with varying background conditions is employed to investigate the possible factors that control the vertical rise velocity and maximum attainable altitudes of EPBs. The estimated rise velocities from EAR observations at both postsunset and midnight hours are, in general, consistent with the nonlinear evolution of EPBs from the HIRB model.

  10. Tidally adjusted estimates of topographic vulnerability to sea level rise and flooding for the contiguous United States

    International Nuclear Information System (INIS)

    Strauss, Benjamin H; Ziemlinski, Remik; Weiss, Jeremy L; Overpeck, Jonathan T

    2012-01-01

    Because sea level could rise 1 m or more during the next century, it is important to understand what land, communities and assets may be most at risk from increased flooding and eventual submersion. Employing a recent high-resolution edition of the National Elevation Dataset and using VDatum, a newly available tidal model covering the contiguous US, together with data from the 2010 Census, we quantify low-lying coastal land, housing and population relative to local mean high tide levels, which range from ∼0 to 3 m in elevation (North American Vertical Datum of 1988). Previous work at regional to national scales has sometimes equated elevation with the amount of sea level rise, leading to underestimated risk anywhere where the mean high tide elevation exceeds 0 m, and compromising comparisons across regions with different tidal levels. Using our tidally adjusted approach, we estimate the contiguous US population living on land within 1 m of high tide to be 3.7 million. In 544 municipalities and 38 counties, we find that over 10% of the population lives below this line; all told, some 2150 towns and cities have some degree of exposure. At the state level, Florida, Louisiana, California, New York and New Jersey have the largest sub-meter populations. We assess topographic susceptibility of land, housing and population to sea level rise for all coastal states, counties and municipalities, from 0 to 6 m above mean high tide, and find important threat levels for widely distributed communities of every size. We estimate that over 22.9 million Americans live on land within 6 m of local mean high tide. (letter)
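
    The tidal adjustment described above amounts to referencing each elevation cell to local mean high water (from the tidal model) rather than to a fixed datum before applying a water-level threshold; the toy grid below illustrates that screening step with assumed values.

      # Minimal sketch of tidally adjusted screening: land counts as exposed when its
      # elevation above *local* mean high water (MHW), not above a fixed datum, is less
      # than the water level of interest. Toy numbers only.
      import numpy as np

      elevation_navd88 = np.array([[0.5, 1.2, 2.9],
                                   [1.8, 3.4, 0.9],
                                   [2.2, 0.3, 1.1]])   # cell elevations (m, NAVD88), assumed
      local_mhw = 0.8          # local MHW relative to NAVD88 (m), from a tidal model (assumed)
      water_level = 1.0        # sea-level-rise scenario above MHW (m), assumed
      cell_area_km2 = 0.01     # area represented by each grid cell, assumed

      exposed = (elevation_navd88 - local_mhw) <= water_level
      print(f"exposed cells: {exposed.sum()} of {exposed.size}, "
            f"area ~ {exposed.sum() * cell_area_km2:.2f} km^2")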

  11. Methodology for estimation of secondary meteorological variables to be used in local dispersion of air pollutants

    International Nuclear Information System (INIS)

    Turtos, L.; Sanchez, M.; Roque, A.; Soltura, R.

    2003-01-01

    Methodology for estimation of secondary meteorological variables to be used in local dispersion of air pollutants. This paper include the main works, carried out into the frame of the project Atmospheric environmental externalities of the electricity generation in Cuba, aiming to develop methodologies and corresponding software, which will allow to improve the quality of the secondary meteorological data used in atmospheric pollutant calculations; specifically the wind profiles coefficient, urban and rural mixed high and temperature gradients

  12. Sea level rise at Honolulu and Hilo, Hawaii: GPS estimates of differential land motion

    Science.gov (United States)

    Caccamise, Dana J.; Merrifield, Mark A.; Bevis, Michael; Foster, James; Firing, Yvonne L.; Schenewerk, Mark S.; Taylor, Frederick W.; Thomas, Donald A.

    2005-02-01

    Since 1946, sea level at Hilo on the Big Island of Hawaii has risen an average of 1.8 +/- 0.4 mm/yr faster than at Honolulu on the island of Oahu. This difference has been attributed to subsidence of the Big Island. However, GPS measurements indicate that Hilo is sinking relative to Honolulu at a rate of -0.4 +/- 0.5 mm/yr, which is too small to account for the difference in sea level trends. In the past 30 years, there has been a statistically significant reduction in the relative sea level trend. While it is possible that the rates of land motion have changed over this time period, the available hydrographic data suggest that interdecadal variations in upper ocean temperature account for much of the differential sea level signal between the two stations, including the recent trend change. These results highlight the challenges involved in estimating secular sea level trends in the presence of significant low frequency variability.

  13. Estimation of structural film viscosity based on the bubble rise method in a nanofluid.

    Science.gov (United States)

    Cho, Heon Ki; Nikolov, Alex D; Wasan, Darsh T

    2018-04-15

    When a single bubble moves at a very low capillary number (10⁻⁷) through a liquid with dispersed nanoparticles (nanofluid) inside a vertical tube/capillary, a film is formed between the bubble surface and the tube wall and the nanoparticles self-layer inside the confined film. We measured the film thickness using reflected light interferometry. We calculated the film structural energy isotherm vs. the film thickness from the film-meniscus contact angle measurements using the reflected light interferometric method. Based on the experimental measurement of the film thickness and the calculated values of the film structural energy barrier, we estimated the structural film viscosity vs. the film thickness using the Frenkel approach. Because of the nanoparticle film self-layering phenomenon, we observed a gradual increase in the film viscosity with the decreasing film thickness. However, we observed a significant increase in the film viscosity accompanied by a step-wise decrease in the bubble velocity when the film thickness decreased from 3 to 2 particle layers due to the structural transition in the film. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. A Novel Methodology for Estimating State-Of-Charge of Li-Ion Batteries Using Advanced Parameters Estimation

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Safwat

    2017-11-01

    Full Text Available State-of-charge (SOC) estimations of Li-ion batteries have been the focus of many research studies in previous years. Many articles discussed the dynamic model’s parameters estimation of the Li-ion battery, where the fixed forgetting factor recursive least square estimation methodology is employed. However, the change rate of each parameter to reach the true value is not taken into consideration, which may lead to poor estimation. This article discusses this issue, and proposes two solutions to solve it. The first solution is the usage of a variable forgetting factor instead of a fixed one, while the second solution is defining a vector of forgetting factors, which means one factor for each parameter. After parameters estimation, a new idea is proposed to estimate the state-of-charge (SOC) of the Li-ion battery based on Newton’s method. Also, the error percentage and computational cost are discussed and compared with those of nonlinear Kalman filters. This methodology is applied to a 36 V 30 A Li-ion pack to validate this idea.
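
    As background to the improvements proposed here, the fixed-forgetting-factor recursive least squares (RLS) baseline can be sketched as below on synthetic data; the regressor structure and noise levels are assumptions, and the variable or per-parameter forgetting factors introduced in the article are not implemented in this sketch.

      # Baseline sketch: recursive least squares (RLS) with a single forgetting factor,
      # the starting point the article improves on. Model structure and data are synthetic.
      import numpy as np

      def rls_step(theta, P, x, y, lam=0.98):
          """One RLS update: theta = parameters, P = covariance, x = regressor, y = measurement."""
          x = x.reshape(-1, 1)
          K = P @ x / (lam + float(x.T @ P @ x))      # gain vector
          err = y - float(x.T @ theta)                # prediction error
          theta = theta + K * err
          P = (P - K @ x.T @ P) / lam
          return theta, P

      rng = np.random.default_rng(2)
      true_theta = np.array([[0.8], [0.05]])          # e.g. [dynamic term, ohmic term], assumed
      theta = np.zeros((2, 1))
      P = np.eye(2) * 1e3

      for _ in range(500):
          x = rng.normal(size=2)                      # regressor (e.g. past voltage, current)
          y = float(x @ true_theta) + rng.normal(scale=0.01)
          theta, P = rls_step(theta, P, x, y)

      print("estimated parameters:", theta.ravel())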

  15. Methodology for estimating biomass energy potential and its application to Colombia

    International Nuclear Information System (INIS)

    Gonzalez-Salazar, Miguel Angel; Morini, Mirko; Pinelli, Michele; Spina, Pier Ruggero; Venturini, Mauro; Finkenrath, Matthias; Poganietz, Witold-Roger

    2014-01-01

    Highlights: • Methodology to estimate the biomass energy potential and its uncertainty at a country level. • Harmonization of approaches and assumptions in existing assessment studies. • The theoretical and technical biomass energy potential in Colombia are estimated in 2010. - Abstract: This paper presents a methodology to estimate the biomass energy potential and its associated uncertainty at a country level when quality and availability of data are limited. The current biomass energy potential in Colombia is assessed following the proposed methodology and results are compared to existing assessment studies. The proposed methodology is a bottom-up resource-focused approach with statistical analysis that uses a Monte Carlo algorithm to stochastically estimate the theoretical and the technical biomass energy potential. The paper also includes a proposed approach to quantify uncertainty combining a probabilistic propagation of uncertainty, a sensitivity analysis and a set of disaggregated sub-models to estimate reliability of predictions and reduce the associated uncertainty. Results predict a theoretical energy potential of 0.744 EJ and a technical potential of 0.059 EJ in 2010, which might account for 1.2% of the annual primary energy production (4.93 EJ)
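
    The Monte Carlo step of the proposed methodology, sampling uncertain inputs and propagating them through a bottom-up resource model, can be sketched as below; all distributions and conversion factors are assumed values chosen only to show the structure of the calculation.

      # Minimal sketch of stochastic (Monte Carlo) propagation through a bottom-up
      # biomass resource model. All distributions are assumed, illustrative values.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000

      crop_production_t = rng.normal(30e6, 3e6, n)        # annual crop production (t), assumed
      residue_ratio = rng.uniform(1.2, 1.8, n)            # t residue per t crop, assumed
      lhv_gj_per_t = rng.normal(15.0, 1.5, n)             # lower heating value (GJ/t), assumed
      recoverable_frac = rng.uniform(0.20, 0.35, n)       # technically recoverable share, assumed

      theoretical_ej = crop_production_t * residue_ratio * lhv_gj_per_t / 1e9   # EJ
      technical_ej = theoretical_ej * recoverable_frac

      for name, x in [("theoretical", theoretical_ej), ("technical", technical_ej)]:
          lo, med, hi = np.percentile(x, [5, 50, 95])
          print(f"{name} potential: median {med:.2f} EJ (90% interval {lo:.2f}-{hi:.2f} EJ)")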

  16. A PROPOSED METHODOLOGY FOR ESTIMATING ECOREGIONAL VALUES FOR OUTDOOR RECREATION IN THE UNITED STATES

    OpenAIRE

    Bhat, Gajanan; Bergstrom, John C.; Bowker, James Michael; Cordell, H. Ken

    1996-01-01

    This paper provides a methodology for the estimation of recreational demand functions and values using an ecoregional approach. Ten ecoregions in the continental US were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate the recreational demand functions for activities such as motorboating and waterskiing, developed and primative camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecor...

  17. Simplified Methodology to Estimate the Maximum Liquid Helium (LHe) Cryostat Pressure from a Vacuum Jacket Failure

    Science.gov (United States)

    Ungar, Eugene K.; Richards, W. Lance

    2015-01-01

    The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact

  18. A methodology to calibrate water saturation estimated from 4D seismic data

    International Nuclear Information System (INIS)

    Davolio, Alessandra; Maschio, Célio; José Schiozer, Denis

    2014-01-01

    Time-lapse seismic data can be used to estimate saturation changes within a reservoir, which is valuable information for reservoir management as it plays an important role in updating reservoir simulation models. The process of updating reservoir properties, history matching, can incorporate estimated saturation changes qualitatively or quantitatively. For quantitative approaches, reliable information from 4D seismic data is important. This work proposes a methodology to calibrate the volume of water in the estimated saturation maps, as these maps can be wrongly estimated due to problems with seismic signals (such as noise, errors associated with data processing and resolution issues). The idea is to condition the 4D seismic data to known information provided by engineering, in this case the known amount of injected and produced water in the field. The application of the proposed methodology in an inversion process (previously published) that estimates saturation from 4D seismic data is presented, followed by a discussion concerning the use of such data in a history matching process. The methodology is applied to a synthetic dataset to validate the results, the main ones being: (1) reduction of the effects of noise and errors in the estimated saturation, yielding more reliable data to be used quantitatively or qualitatively and (2) an improvement in the properties update after using this data in a history matching procedure. (paper)
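
    One simple way to picture the volume conditioning described above is a uniform rescaling of the seismic-derived saturation-change map so that the implied water volume matches the known injected-minus-produced volume, as sketched below; this rescaling is an illustrative assumption, not necessarily the calibration scheme used by the authors.

      # Minimal sketch of volume conditioning in the spirit described above: rescale a
      # seismic-derived water-saturation-change map so the implied water volume matches
      # the known injected-minus-produced volume. Uniform rescaling is an assumption.
      import numpy as np

      rng = np.random.default_rng(4)
      dsw_seismic = rng.normal(0.05, 0.03, size=(50, 50))   # estimated saturation change per cell
      pore_volume = np.full((50, 50), 1.0e4)                # cell pore volumes (m^3), assumed
      known_net_injection = 2.0e6                           # injected minus produced water (m^3), assumed

      implied_volume = np.sum(dsw_seismic * pore_volume)
      scale = known_net_injection / implied_volume
      dsw_calibrated = dsw_seismic * scale

      print(f"implied volume before calibration: {implied_volume:.3e} m^3")
      print(f"scale factor applied: {scale:.2f}")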

  19. Methodology for estimation of potential for solar water heating in a target area

    International Nuclear Information System (INIS)

    Pillai, Indu R.; Banerjee, Rangan

    2007-01-01

    Proper estimation of the potential of any renewable energy technology is essential for planning and promotion of the technology. The methods reported in the literature for estimation of the potential of solar water heating in a target area are aggregate in nature. A methodology for potential estimation (technical, economic and market potential) of solar water heating in a target area is proposed in this paper. This methodology links the micro-level factors and macro-level market effects affecting the diffusion or adoption of solar water heating systems. Different sectors with end uses of low temperature hot water are considered for potential estimation. Potential is estimated at each end use point by simulation using TRNSYS, taking micro-level factors into account. The methodology is illustrated for a synthetic area in India with an area of 2 sq. km and a population of 10,000. The end use sectors considered are residential, hospitals, nursing homes and hotels. The estimated technical potential and market potential are 1700 m² and 350 m² of collector area, respectively. The annual energy savings for the technical potential in the area are estimated as 110 kWh/capita and 0.55 million kWh/sq. km of area, with an annual average peak saving of 1 MW. The annual savings are 650 kWh per m² of collector area and account for approximately 3% of the total electricity consumption of the target area. Some of the salient features of the model are the factors considered for potential estimation, estimation of the electrical usage pattern for a typical day, the amount of electricity savings, and savings during peak load. The framework is general and enables accurate estimation of the potential of solar water heating for a city or block. Energy planners and policy makers can use this framework for tracking and promotion of diffusion of solar water heating systems. (author)

  20. Methodological framework for World Health Organization estimates of the global burden of foodborne disease

    NARCIS (Netherlands)

    B. Devleesschauwer (Brecht); J.A. Haagsma (Juanita); F.J. Angulo (Frederick); D.C. Bellinger (David); D. Cole (Dana); D. Döpfer (Dörte); A. Fazil (Aamir); E.M. Fèvre (Eric); H.J. Gibb (Herman); T. Hald (Tine); M.D. Kirk (Martyn); R.J. Lake (Robin); C. Maertens De Noordhout (Charline); C. Mathers (Colin); S.A. McDonald (Scott); S.M. Pires (Sara); N. Speybroeck (Niko); M.K. Thomas (Kate); D. Torgerson; F. Wu (Felicia); A.H. Havelaar (Arie); N. Praet (Nicolas)

    2015-01-01

    textabstractBackground: The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force

  1. CSA C873 Building Energy Estimation Methodology - A simplified monthly calculation for quick building optimization

    NARCIS (Netherlands)

    Legault, A.; Scott, L.; Rosemann, A.L.P.; Hopkins, M.

    2014-01-01

    CSA C873 Building Energy Estimation Methodology (BEEM) is a new series of (10) standards that is intended to simplify building energy calculations. The standard is based upon the German DIN Standard 18599 that has 8 years of proven track record and has been modified for the Canadian market. The BEEM

  2. Methodologies for estimating one-time hazardous waste generation for capacity assurance planning

    International Nuclear Information System (INIS)

    Tonn, B.; Hwang, Ho-Ling; Elliot, S.; Peretz, J.; Bohm, R.; Hendrucko, B.

    1994-04-01

    This report contains descriptions of methodologies to be used to estimate the one-time generation of hazardous waste associated with five different types of remediation programs: Superfund sites, RCRA Corrective Actions, Federal Facilities, Underground Storage Tanks, and State and Private Programs. Estimates of the amount of hazardous wastes generated from these sources to be shipped off-site to commercial hazardous waste treatment and disposal facilities will be made on a state by state basis for the years 1993, 1999, and 2013. In most cases, estimates will be made for the intervening years, also

  3. Development of Methodologies for the Estimation of Thermal Properties Associated with Aerospace Vehicles

    Science.gov (United States)

    Scott, Elaine P.

    1996-01-01

    A thermal stress analysis is an important aspect in the design of aerospace structures and vehicles such as the High Speed Civil Transport (HSCT) at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC). These structures are complex and are often composed of numerous components fabricated from a variety of different materials. The thermal loads on these structures induce temperature variations within the structure, which in turn result in the development of thermal stresses. Therefore, a thermal stress analysis requires knowledge of the temperature distributions within the structures which consequently necessitates the need for accurate knowledge of the thermal properties, boundary conditions and thermal interface conditions associated with the structural materials. The goal of this proposed multi-year research effort was to develop estimation methodologies for the determination of the thermal properties and interface conditions associated with aerospace vehicles. Specific objectives focused on the development and implementation of optimal experimental design strategies and methodologies for the estimation of thermal properties associated with simple composite and honeycomb structures. The strategy used in this multi-year research effort was to first develop methodologies for relatively simple systems and then systematically modify these methodologies to analyze complex structures. This can be thought of as a building block approach. This strategy was intended to promote maximum usability of the resulting estimation procedure by NASA-LARC researchers through the design of in-house experimentation procedures and through the use of an existing general purpose finite element software.

  4. Estimation of CO2 emissions from China’s cement production: Methodologies and uncertainties

    International Nuclear Information System (INIS)

    Ke, Jing; McNeil, Michael; Price, Lynn; Khanna, Nina Zheng; Zhou, Nan

    2013-01-01

    In 2010, China’s cement output was 1.9 Gt, which accounted for 56% of world cement production. Total carbon dioxide (CO2) emissions from Chinese cement production could therefore exceed 1.2 Gt. The magnitude of emissions from this single industrial sector in one country underscores the need to understand the uncertainty of current estimates of cement emissions in China. This paper compares several methodologies for calculating CO2 emissions from cement production, including the three main components of emissions: direct emissions from the calcination process for clinker production, direct emissions from fossil fuel combustion and indirect emissions from electricity consumption. This paper examines in detail the differences between common methodologies for each emission component, and considers their effect on total emissions. We then evaluate the overall level of uncertainty implied by the differences among methodologies according to recommendations of the Joint Committee for Guides in Metrology. We find a relative uncertainty in China’s cement-related emissions in the range of 10 to 18%. This result highlights the importance of understanding and refining methods of estimating emissions in this important industrial sector. - Highlights: ► CO2 emission estimates are critical given China’s cement production scale. ► Methodological differences for emission components are compared. ► Results show relative uncertainty in China’s cement-related emissions of about 10%. ► IPCC Guidelines and CSI Cement CO2 and Energy Protocol are recommended
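
    The three emission components compared in the paper combine as a simple sum, as sketched below; the activity data and emission factors shown are rough, commonly cited magnitudes used only to illustrate the structure of the calculation, not the paper's preferred values.

      # Minimal sketch of the three emission components discussed above. The emission
      # factors and activity data are rough illustrative magnitudes, not the paper's values.
      clinker_t = 1.1e9            # clinker production (t), assumed
      coal_tj = 3.0e6              # fuel use in kilns (TJ), assumed
      electricity_mwh = 1.7e8      # electricity consumption (MWh), assumed

      ef_calcination = 0.52        # t CO2 per t clinker (process emissions), assumed
      ef_coal = 96.0               # t CO2 per TJ of coal, assumed
      ef_grid = 0.8                # t CO2 per MWh (grid average), assumed

      process = clinker_t * ef_calcination
      combustion = coal_tj * ef_coal
      indirect = electricity_mwh * ef_grid
      total_gt = (process + combustion + indirect) / 1e9

      print(f"process {process/1e9:.2f} Gt, fuel {combustion/1e9:.2f} Gt, "
            f"electricity {indirect/1e9:.2f} Gt, total {total_gt:.2f} Gt CO2")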

  5. Uterotonic use immediately following birth: using a novel methodology to estimate population coverage in four countries.

    Science.gov (United States)

    Ricca, Jim; Dwivedi, Vikas; Varallo, John; Singh, Gajendra; Pallipamula, Suranjeen Prasad; Amade, Nazir; de Luz Vaz, Maria; Bishanga, Dustan; Plotkin, Marya; Al-Makaleh, Bushra; Suhowatsky, Stephanie; Smith, Jeffrey Michael

    2015-01-22

    Postpartum hemorrhage (PPH) is the leading cause of maternal mortality in developing countries. While incidence of PPH can be dramatically reduced by uterotonic use immediately following birth (UUIFB) in both community and facility settings, national coverage estimates are rare. Most national health systems have no indicator to track this, and community-based measurements are even more scarce. To fill this information gap, a methodology for estimating national coverage for UUIFB was developed and piloted in four settings. The rapid estimation methodology consisted of convening a group of national technical experts and using the Delphi method to come to consensus on key data elements that were applied to a simple algorithm, generating a non-precise national estimate of coverage of UUIFB. Data elements needed for the calculation were the distribution of births by location and estimates of UUIFB in each of those settings, adjusted to take account of stockout rates and potency of uterotonics. This exercise was conducted in 2013 in Mozambique, Tanzania, the state of Jharkhand in India, and Yemen. Available data showed that deliveries in public health facilities account for approximately half of births in Mozambique and Tanzania, 16% in Jharkhand and 24% of births in Yemen. Significant proportions of births occur in private facilities in Jharkhand and faith-based facilities in Tanzania. Estimated uterotonic use for facility births ranged from 70 to 100%. Uterotonics are not used routinely for PPH prevention at home births in any of the settings. National UUIFB coverage estimates of all births were 43% in Mozambique, 40% in Tanzania, 44% in Jharkhand, and 14% in Yemen. This methodology for estimating coverage of UUIFB was found to be feasible and acceptable. While the exercise produces imprecise estimates whose validity cannot be assessed objectively in the absence of a gold standard estimate, stakeholders felt they were accurate enough to be actionable. The exercise
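
    The coverage algorithm described above reduces to a weighted sum over birth settings, with adjustments for stockouts and uterotonic potency; the sketch below uses hypothetical inputs, not the country data elicited from the expert panels.

      # Minimal sketch of the coverage algorithm: national coverage is the sum over birth
      # settings of (share of births) x (uterotonic use), adjusted for stockouts and drug
      # potency. All input values are hypothetical.
      birth_settings = {
          # setting: (share of all births, uterotonic use given no stockout,
          #           stockout rate, fraction of stock assumed potent)
          "public facility":  (0.50, 0.90, 0.10, 0.95),
          "private facility": (0.10, 0.80, 0.05, 0.95),
          "home":             (0.40, 0.00, 0.00, 1.00),
      }

      coverage = sum(share * use * (1.0 - stockout) * potency
                     for share, use, stockout, potency in birth_settings.values())
      print(f"estimated national UUIFB coverage: {coverage:.0%}")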

  6. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdf's), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions for which the accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to coded models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions
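
    As a rough stand-in for the idea of representing code-model uncertainty as a pdf of model error built from separate-effect-test comparisons, the sketch below applies a standard Gaussian kernel density estimate to a synthetic error sample; the paper's own newly developed non-parametric estimator is not reproduced here.

      # Rough stand-in only: a standard Gaussian kernel density estimate of a synthetic
      # "model error" sample (predicted minus measured void fraction). This is not the
      # paper's non-parametric estimator.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(5)
      # Hypothetical errors over many separate-effect test points
      errors = rng.normal(loc=0.02, scale=0.05, size=400)

      pdf = gaussian_kde(errors)
      grid = np.linspace(errors.min(), errors.max(), 200)
      mode = grid[np.argmax(pdf(grid))]
      print(f"estimated error pdf mode ~ {mode:.3f}; "
            f"P(|error| < 0.05) ~ {pdf.integrate_box_1d(-0.05, 0.05):.2f}")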

  7. Compendium of Greenhouse Gas Emissions Estimation Methodologies for the Oil and Gas Industry

    Energy Technology Data Exchange (ETDEWEB)

    Shires, T.M.; Loughran, C.J. [URS Corporation, Austin, TX (United States)

    2004-02-01

    This document is a compendium of currently recognized methods and provides details for all oil and gas industry segments to enhance consistency in emissions estimation. This Compendium aims to accomplish the following goals: Assemble an expansive collection of relevant emission factors for estimating GHG emissions, based on currently available public documents; Outline detailed procedures for conversions between different measurement unit systems, with particular emphasis on implementation of oil and gas industry standards; Provide descriptions of the multitude of oil and gas industry operations, in its various segments, and the associated emissions sources that should be considered; and Develop emission inventory examples, based on selected facilities from the various segments, to demonstrate the broad applicability of the methodologies. The overall objective of developing this document is to promote the use of consistent, standardized methodologies for estimating GHG emissions from petroleum industry operations. The resulting Compendium documents recognized calculation techniques and emission factors for estimating GHG emissions for oil and gas industry operations. These techniques cover the calculation or estimation of emissions from the full range of industry operations - from exploration and production through refining, to the marketing and distribution of products. The Compendium presents and illustrates the use of preferred and alternative calculation approaches for carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) emissions for all common emission sources, including combustion, vented, and fugitive. Decision trees are provided to guide the user in selecting an estimation technique based on considerations of materiality, data availability, and accuracy. API will provide (free of charge) a calculation tool based on the emission estimation methodologies described herein. The tool will be made available at http://ghg.api.org/.

  8. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M and S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M and S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  9. Methodologies for estimating toxicity of shoreline cleaning agents in the field

    International Nuclear Information System (INIS)

    Clayton, J.R.Jr.; Stransky, B.C.; Schwartz, M.J.; Snyder, B.J.; Lees, D.C.; Michel, J.; Reilly, T.J.

    1996-01-01

    Four methodologies that could be used in a portable kit to estimate quantitative and qualitative information regarding the toxicity of oil spill cleaning agents were evaluated. Shoreline cleaning agents (SCAs) are meant to enhance the removal of treated oil from shoreline surfaces and should not increase adverse impacts to organisms in a treated area. Tests, therefore, should be performed with resident organisms likely to be impacted during the use of SCAs. The four methodologies were Microtox™, fertilization success for echinoderm eggs, byssal thread attachment in mussels, and righting and water-escaping ability in periwinkle snails. Site-specific variations in physical and chemical properties of the oil and SCAs were considered. Results were provided showing all combinations of oils and SCAs. Evaluation showed that all four methodologies provided sufficient information to assist a user in deciding whether or not the use of an SCA was warranted. 33 refs., 7 tabs., 11 figs

  10. Heuristic Methodology for Estimating the Liquid Biofuel Potential of a Region

    Directory of Open Access Journals (Sweden)

    Dorel Dusmanescu

    2016-08-01

    This paper presents a heuristic methodology for estimating the possible variation of the liquid biofuel potential of a region, an appraisal made for a future period of time. To date, the liquid biofuel potential has been determined either on the basis of an average (constant) yield of the energy crops used, or on the basis of a yield that varies according to a known trend, which can be estimated through a certain method. The proposed methodology uses the variation of the yield of energy crops over time to simulate a variation of the biofuel potential over a future ten-year period. This new approach to the problem of determining the liquid biofuel potential of a certain land area can be useful for investors, as it allows a more realistic analysis of the investment risk and of the possibilities of recovering the investment. The presented methodology can also be useful to governmental administrations in elaborating strategies and policies to ensure the supply of fuels and liquid biofuels for transportation in a certain area. Unlike current methods, which approach the problem of determining the liquid biofuel potential in a deterministic way using econometric methods, the proposed methodology uses heuristic reasoning schemes to reduce the great number of factors that actually influence the biofuel potential and which usually have unknown values.
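    A minimal sketch of the kind of yield-variation simulation the record describes: a randomly varying crop yield is propagated into a range of liquid biofuel potential over a ten-year horizon. The area, yield statistics, and conversion factor are invented for illustration only.

    ```python
    import random
    random.seed(42)

    area_ha = 50_000            # land area assumed available for energy crops (hypothetical)
    litres_per_tonne = 400      # biofuel conversion factor (hypothetical placeholder)

    potentials = []
    for year in range(10):
        yield_t_per_ha = max(random.gauss(3.0, 0.4), 0.0)   # simulated year-to-year crop yield variation
        potentials.append(area_ha * yield_t_per_ha * litres_per_tonne / 1.0e6)  # million litres

    print(f"Simulated 10-year biofuel potential range: {min(potentials):.0f}-{max(potentials):.0f} million litres/yr")
    ```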

  11. Methodological Challenges in Estimating Trends and Burden of Cardiovascular Disease in Sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Jacob K. Kariuki

    2015-01-01

    Background. Although 80% of the burden of cardiovascular disease (CVD) is in developing countries, the 2010 global burden of disease (GBD) estimates have been cited to support a premise that sub-Saharan Africa (SSA) is exempt from the CVD epidemic sweeping across developing countries. This widely publicized perspective influences research priorities and resource allocation at a time when secular trends indicate a rapid increase in prevalence of CVD in SSA by 2030. Purpose. To explore methodological challenges in estimating trends and burden of CVD in SSA via appraisal of the current CVD statistics and literature. Methods. This review was guided by the Critical review methodology described by Grant and Booth. The review traces the origins and evolution of GBD metrics and then explores the methodological limitations inherent in the current GBD statistics. Articles were included based on their conceptual contribution to the existing body of knowledge on the burden of CVD in SSA. Results/Conclusion. Cognizant of the methodological challenges discussed, we caution against extrapolation of the global burden of CVD statistics in a way that underrates the actual but uncertain impact of CVD in SSA. We conclude by making a case for optimal but cost-effective surveillance and prevention of CVD in SSA.

  12. Methodological Framework for World Health Organization Estimates of the Global Burden of Foodborne Disease.

    Directory of Open Access Journals (Sweden)

    Brecht Devleesschauwer

    The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force to transform epidemiological information into FBD burden estimates. The global and regional burden of 31 FBDs was quantified, along with limited estimates for 5 other FBDs, using Disability-Adjusted Life Years in a hazard- and incidence-based approach. To accomplish this task, the following workflow was defined: outline of disease models and collection of epidemiological data; design and completion of a database template; development of an imputation model; identification of disability weights; probabilistic burden assessment; and estimating the proportion of the disease burden of each hazard that is attributable to exposure through food (i.e., source attribution). All computations were performed in R and the different functions were compiled in the R package 'FERG'. Traceability and transparency were ensured by sharing results and methods in an interactive way with all FERG members throughout the process. We developed a comprehensive framework for estimating the global burden of FBDs, in which methodological simplicity and transparency were key elements. All the tools developed have been made available and can be translated into a user-friendly national toolkit for studying and monitoring food safety at the local level.

  13. Estimating the Potential Risks of Sea Level Rise for Public and Private Property Ownership, Occupation and Management

    Directory of Open Access Journals (Sweden)

    Georgia Warren-Myers

    2018-04-01

    The estimation of future sea level rise (SLR) is a major concern for cities near coastlines and river systems. Despite this, current modelling underestimates the future risks of SLR to property. Direct risks posed to property include inundation, loss of physical property and associated economic and social costs. It is also crucial to consider the risks that emerge from scenarios after SLR. These may produce one-off or periodic events that will inflict physical, economic and social implications, and direct, indirect and consequential losses. Using a case study approach, this paper combines various forms of data to examine the implications of future SLR to further understand the potential risks. The research indicates that the financial implications for local government will be loss of rates associated with total property loss and declines in value. The challenges identified are not specific to this research. Other municipalities worldwide experience similar barriers (i.e., financial implications, coastal planning predicaments, data paucity, knowledge and capacity, and legal and political challenges). This research highlights the need for private and public stakeholders to co-develop and implement strategies to mitigate and adapt property to withstand the future challenges of climate change and SLR.

  14. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    Science.gov (United States)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which in turn requires accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach. The idea here is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort

  15. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computational-based technology development, the experiments will still be needed but they will be performed at different scales to calibrate and validate models leading to predictive simulations. Cost-saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can be potentially used to address these new challenges in design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  16. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computational-based technology development, the experiments will still be needed but they will be performed at different scales to calibrate and validate models leading to predictive simulations. Cost-saving goals of programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can be potentially used to address these new challenges in design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. To achieve this goal requires the additional steps of estimating the domain of validation and quantification of uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  17. Estimation of retired mobile phones generation in China: A comparative study on methodology

    Energy Technology Data Exchange (ETDEWEB)

    Li, Bo [State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Shuangqing Road 18, Haidian District, Beijing 100085 (China); Yang, Jianxin, E-mail: yangjx@rcees.ac.cn [State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Shuangqing Road 18, Haidian District, Beijing 100085 (China); Lu, Bin [State Key Laboratory of Urban and Regional Ecology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Shuangqing Road 18, Haidian District, Beijing 100085 (China); Song, Xiaolong [Shanghai Cooperative Centre for WEEE Recycling, Shanghai Second Polytechnic University, Jinhai Road 2360, Pudong District, Shanghai 201209 (China)

    2015-01-15

    Highlights: • The sales data of mobile phones in China were revised by considering the amount of smuggled and counterfeit mobile phones. • The estimation of retired mobile phones in China was made by comparing some relevant methods. • The improved estimation result can help improve policy-making. • The method suggested in this paper can also be used in other countries. • Some discussions on methodology are also conducted with a view to improvement. - Abstract: Due to the rapid development of economy and technology, China has the biggest production and possession of mobile phones around the world. In general, mobile phones have a relatively short lifetime because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. The generation estimation can provide fundamental information to construct the sustainable management system of retired mobile phones and other waste electrical and electronic equipment (WEEE). However, a reliable estimation result is difficult to obtain and verify. The priority aim of this paper is to provide a proper estimation approach for the generation of retired mobile phones in China, by comparing some relevant methods. The results show that the sales and new method has the highest priority for estimating retired mobile phones. The sales and new method shows that 47.92 million mobile phones were retired in 2002, reaching 739.98 million in China in 2012. This presents a clearly increasing tendency with some fluctuations. Furthermore, some discussions on methodology, such as the selection of an improper approach and errors in the input data, are also conducted in order to

  18. Estimation of retired mobile phones generation in China: A comparative study on methodology

    International Nuclear Information System (INIS)

    Li, Bo; Yang, Jianxin; Lu, Bin; Song, Xiaolong

    2015-01-01

    Highlights: • The sales data of mobile phones in China were revised by considering the amount of smuggled and counterfeit mobile phones. • The estimation of retired mobile phones in China was made by comparing some relevant methods. • The improved estimation result can help improve policy-making. • The method suggested in this paper can also be used in other countries. • Some discussions on methodology are also conducted with a view to improvement. - Abstract: Due to the rapid development of economy and technology, China has the biggest production and possession of mobile phones around the world. In general, mobile phones have a relatively short lifetime because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. The generation estimation can provide fundamental information to construct the sustainable management system of retired mobile phones and other waste electrical and electronic equipment (WEEE). However, a reliable estimation result is difficult to obtain and verify. The priority aim of this paper is to provide a proper estimation approach for the generation of retired mobile phones in China, by comparing some relevant methods. The results show that the sales and new method has the highest priority for estimating retired mobile phones. The sales and new method shows that 47.92 million mobile phones were retired in 2002, reaching 739.98 million in China in 2012. This presents a clearly increasing tendency with some fluctuations. Furthermore, some discussions on methodology, such as the selection of an improper approach and errors in the input data, are also conducted in order to

  19. Methodology for estimating realistic responses of buildings and components under earthquake motion and its application

    International Nuclear Information System (INIS)

    Ebisawa, Katsumi; Abe, Kiyoharu; Kohno, Kunihiko; Nakamura, Hidetaka; Itoh, Mamoru.

    1996-11-01

    Failure probabilities of buildings and components under earthquake motion are estimated as conditional probabilities that their realistic responses exceed their capacities. Two methods for estimating their failure probabilities have already been developed. One is a detailed method developed in the Seismic Safety Margins Research Program of Lawrence Livermore National Laboratory in the U.S.A., which is called the 'SSMRP method'. The other is a simplified method proposed by Kennedy et al., which is called the 'Zion method'. The Zion method is sometimes called the 'response factor method'. The authors adopted the response factor method. In order to enhance the estimation accuracy of failure probabilities of buildings and components, however, a new methodology for improving the response factor method was proposed. Based on the improved method, response factors of buildings and components designed to the seismic design standard in Japan were estimated, and their realistic responses were also calculated. By using their realistic responses and capacities, the failure probabilities of a reactor building and relays were estimated. In order to identify the differences between the new method, the SSMRP method and the original response factor method, the failure probabilities estimated by these three methods were compared. A method similar to SSMRP was used instead of the original SSMRP to save time and labor. The viewpoints for selecting the methods to estimate failure probabilities of buildings and components were also proposed. (author). 55 refs
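    The "realistic response exceeds capacity" idea above is often evaluated with lognormal response and capacity distributions; the sketch below computes that conditional failure probability under this common assumption. The medians and logarithmic standard deviations are illustrative, not values from the study.

    ```python
    from math import log, sqrt
    from statistics import NormalDist

    def conditional_failure_prob(median_response, beta_response, median_capacity, beta_capacity):
        """P(response > capacity) for independent lognormal realistic response and capacity."""
        beta = sqrt(beta_response ** 2 + beta_capacity ** 2)
        return NormalDist().cdf(log(median_response / median_capacity) / beta)

    # Illustrative numbers only: median capacity 2.5x the median realistic response.
    print(conditional_failure_prob(median_response=0.4, beta_response=0.3,
                                   median_capacity=1.0, beta_capacity=0.4))
    ```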

  20. Utility Estimation for Pediatric Vesicoureteral Reflux: Methodological Considerations Using an Online Survey Platform.

    Science.gov (United States)

    Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C

    2017-03-01

    The advent of online task distribution has opened a new avenue for efficiently gathering community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent) and whether instrument induced anchoring bias is significant. We evaluated what methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine what factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in parent vs child, p pediatric conditions. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
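    For readers unfamiliar with time trade-off scoring, the sketch below applies the standard TTO calculation (utility = years of full health judged equivalent, divided by the time horizon) to made-up responses; it is not the study's survey instrument or data.

    ```python
    def tto_utility(years_full_health_accepted, time_horizon_years):
        """Standard TTO score for states better than dead: traded time / time horizon."""
        return years_full_health_accepted / time_horizon_years

    # Hypothetical responses: years of full health judged equivalent to 10 years in the VUR health state.
    responses = [9.5, 9.0, 10.0, 8.5]
    print(round(sum(tto_utility(x, 10.0) for x in responses) / len(responses), 3))
    ```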

  1. Comparison of methodologies estimating emissions of aircraft pollutants, environmental impact assessment around airports

    International Nuclear Information System (INIS)

    Kurniawan, Jermanto S.; Khardi, S.

    2011-01-01

    Air transportation has grown continuously over the years. The rise in air transport activity has been accompanied by an increase in the amount of energy used to provide air transportation services. It is also assumed to increase environmental impacts, in particular pollutant emissions. Traditionally, the environmental impacts of atmospheric emissions from aircraft have been addressed in two separate ways: aircraft pollutant emissions occurring during the landing and take-off (LTO) phase (local pollutant emissions), which is the focus of this study, and the non-LTO phase (global/regional pollutant emissions). Aircraft pollutant emissions are an important source of pollution and directly or indirectly harm human health, ecosystems and cultural heritage. There are many methods used by various countries to assess pollutant emissions. However, using different and separate methodologies will cause variation in results and gaps in information, and the use of certain methods will require justification and reliability that must be demonstrated and proven. In relation to this issue, this paper presents an identification, comparison and review of some of the methodologies of aircraft pollutant assessment from the past, present and future expectations of some studies and projects focusing on emissions factors, fuel consumption, and uncertainty. This paper also provides reliable information on the impacts of aircraft pollutant emissions in short-term and long-term predictions.

  2. Best-estimate methodology for analysis of anticipated transients without scram in pressurized water reactors

    International Nuclear Information System (INIS)

    Rebollo, L.

    1993-01-01

    Union Fenosa, a utility company in Spain, has performed research on pressurized water reactor (PWR) safety with respect to the development of a best-estimate methodology for the analysis of anticipated transients without scram (ATWS), i.e., those anticipated transients for which failure of the reactor protection system is postulated. A scientific and technical approach is adopted with respect to the ATWS phenomenon as it affects a PWR, specifically the Zorita nuclear power plant, a single-loop Westinghouse-designed PWR in Spain. In this respect, an ATWS sequence analysis methodology based on published codes that is generically applicable to any PWR is proposed, which covers all the anticipated phenomena and defines the applicable acceptance criteria. The areas contemplated are cell neutron analysis, core thermal hydraulics, and plant dynamics, which are developed, qualified, and validated by comparison with reference calculations and measurements obtained from integral or separate-effects tests

  3. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    Directory of Open Access Journals (Sweden)

    Elie Bienenstock

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in
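    As a small illustration of the baseline plug-in estimator compared in this record, the sketch below estimates the entropy rate of a binary sequence from empirical word frequencies; the data are synthetic.

    ```python
    from collections import Counter
    from math import log2
    import random

    def plug_in_entropy_rate(bits, word_len=4):
        """Plug-in (maximum-likelihood) block-entropy estimate, in bits per symbol."""
        words = ["".join(map(str, bits[i:i + word_len])) for i in range(len(bits) - word_len + 1)]
        counts, n = Counter(words), len(bits) - word_len + 1
        block_entropy = -sum((c / n) * log2(c / n) for c in counts.values())
        return block_entropy / word_len

    random.seed(0)
    sample = [random.randint(0, 1) for _ in range(10_000)]   # i.i.d. fair bits: true entropy rate is 1
    print(round(plug_in_entropy_rate(sample), 3))
    ```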

  4. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools

    Science.gov (United States)

    Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)

    2002-01-01

    This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.

  5. Estimating small area health-related characteristics of populations: a methodological review

    Directory of Open Access Journals (Sweden)

    Azizur Rahman

    2017-05-01

    Estimation of health-related characteristics at a fine local geographic level is vital for effective health promotion programmes, provision of better health services and population-specific health planning and management. Lack of a micro-dataset readily available for attributes of individuals at small areas negatively impacts the ability of local and national agencies to manage serious health issues and related risks in the community. A solution to this challenge would be to develop a method that simulates reliable small-area statistics. This paper provides a significant appraisal of the methodologies for estimating health-related characteristics of populations in geographically limited areas. Findings reveal that a range of methodologies are in use, which can be classified into three distinct sets of approaches: (i) indirect standardisation and individual-level modelling; (ii) multilevel statistical modelling; and (iii) micro-simulation modelling. Although each approach has its own strengths and weaknesses, it appears that micro-simulation-based spatial models offer significant robustness over the other methods and also represent a more precise means of estimating health-related population characteristics over small areas.

  6. Methodology applied by IRSN for nuclear accident cost estimations in France

    International Nuclear Information System (INIS)

    2013-01-01

    This report describes the methodology used by IRSN to estimate the cost of potential nuclear accidents in France. It concerns possible accidents involving pressurized water reactors leading to radioactive releases into the environment. These accidents have been grouped into two accident families: severe accidents and major accidents. Two model scenarios have been selected to represent each of these families. The report discusses the general methodology of nuclear accident cost estimation. The crucial point is that all costs should be considered: if not, the cost is underestimated, which can lead to negative consequences for the value attributed to safety and for crisis preparation. As a result, the overall cost comprises many components: the best known are offsite radiological costs, but there are many others. The proposed estimates have thus required using a diversity of methods, which are described in this report. Figures are presented at the end of this report. Among other things, they show that purely radiological costs represent only a non-dominant part of foreseeable economic consequences

  7. Methodology to estimate the cost of the severe accidents risk / maximum benefit

    International Nuclear Information System (INIS)

    Mendoza, G.; Flores, R. M.; Vega, E.

    2016-09-01

    For programs and activities to manage aging effects, any changes to plant operations, inspections, maintenance activities, systems, and administrative control procedures during the renewal period that could impact the environment should be characterized and designed to manage the effects of aging, as required by 10 CFR Part 54. Environmental impacts significantly different from those described in the final environmental statement for the current operating license should be described in detail. When complying with the requirements of a license renewal application, the Severe Accident Mitigation Alternatives (SAMA) analysis is contained in a supplement to the environmental report of the plant that meets the requirements of 10 CFR Part 51. In this paper, the methodology for estimating the cost of severe accident risk is established and discussed; it is then used to identify and select the alternatives for severe accident mitigation, which are analyzed to estimate the maximum benefit that an alternative could achieve if it eliminated all risk. The cost of severe accident risk is estimated using the regulatory analysis techniques of the US Nuclear Regulatory Commission (NRC). The ultimate goal of implementing the methodology is to identify candidates for SAMA that have the potential to reduce the severe accident risk and determine if the implementation of each candidate is cost-effective. (Author)
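    A hedged sketch of the "maximum benefit" idea described above: if a SAMA candidate eliminated all severe-accident risk, its benefit would be the present value of the annual cost-risk (frequency times consequence cost) over the remaining licensed life. The frequencies, consequence costs, and discount rate below are invented placeholders.

    ```python
    def max_averted_cost_risk(sequences, years, discount_rate):
        """Present value of annual cost-risk (frequency x consequence cost) over remaining plant life."""
        annual_cost_risk = sum(freq * cost for freq, cost in sequences)              # $/yr
        present_value_factor = (1.0 - (1.0 + discount_rate) ** -years) / discount_rate
        return annual_cost_risk * present_value_factor

    release_categories = [(1.0e-6, 2.0e9), (5.0e-6, 4.0e8)]   # (events/yr, $ consequence), hypothetical
    print(f"Maximum benefit: ${max_averted_cost_risk(release_categories, years=20, discount_rate=0.07):,.0f}")
    ```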

  8. Methodology development for estimating support behavior of spacer grid spring in core

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kang, Heung Seok; Kim, Hyung Kyu; Song, Kee Nam

    1998-04-01

    The fuel rod (FR) support behavior changes during operation as a result of effects such as clad creep-down, spring force relaxation due to irradiation, and irradiation growth of spacer straps, in accordance with time or increase of burnup. The FR support behavior is closely associated with FR damage due to fretting; therefore, analysis of the FR support behavior is normally required to minimize the damage. The characteristics of the parameters, which affect the FR support behavior, and the methodology developed for estimating the FR support behavior in the reactor core are described in this work. The FR support condition for the KOFA (KOrean Fuel Assembly) fuel has been analyzed by this method, and the results of the analysis show that the fuel failure due to the fuel rod fretting wear is closely related to the support behavior of FR in the core. Therefore, the present methodology for estimating the FR support condition seems to be useful for estimating the actual FR support condition. In addition, the optimization seems to be a reliable tool for establishing the optimal support condition on the basis of these results. (author). 15 refs., 3 tabs., 26 figs

  9. A Methodology for Estimating Large-Customer Demand Response MarketPotential

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles; Hopper, Nicole; Bharvirkar, Ranjit; Neenan,Bernie; Cappers,Peter

    2007-08-01

    Demand response (DR) is increasingly recognized as an essential ingredient to well-functioning electricity markets. DR market potential studies can answer questions about the amount of DR available in a given area and from which market segments. Several recent DR market potential studies have been conducted, most adapting techniques used to estimate energy-efficiency (EE) potential. In this scoping study, we: reviewed and categorized seven recent DR market potential studies; recommended a methodology for estimating DR market potential for large, non-residential utility customers that uses price elasticities to account for behavior and prices; compiled participation rates and elasticity values from six DR options offered to large customers in recent years, and demonstrated our recommended methodology with large customer market potential scenarios at an illustrative Northeastern utility. We observe that EE and DR have several important differences that argue for an elasticity approach for large-customer DR options that rely on customer-initiated response to prices, rather than the engineering approaches typical of EE potential studies. Base-case estimates suggest that offering DR options to large, non-residential customers results in 1-3% reductions in their class peak demand in response to prices or incentive payments of $500/MWh. Participation rates (i.e., enrollment in voluntary DR programs or acceptance of default hourly pricing) have the greatest influence on DR impacts of all factors studied, yet are the least well understood. Elasticity refinements to reflect the impact of enabling technologies and response at high prices provide more accurate market potential estimates, particularly when arc elasticities (rather than substitution elasticities) are estimated.
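    A rough sketch of the elasticity-based impact calculation the study recommends for large customers; a constant-elasticity demand curve is assumed here, and the class peak, participation rate, elasticity, and prices are illustrative rather than values from the report.

    ```python
    def peak_reduction_mw(class_peak_mw, participation_rate, elasticity, base_price, event_price):
        """Approximate DR impact assuming constant-elasticity response to a price change."""
        demand_ratio = (event_price / base_price) ** elasticity   # Q1/Q0 = (P1/P0)**elasticity, elasticity < 0
        return class_peak_mw * participation_rate * (1.0 - demand_ratio)

    print(peak_reduction_mw(class_peak_mw=1000.0, participation_rate=0.2,
                            elasticity=-0.05, base_price=100.0, event_price=500.0))
    ```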

  10. Estimating the Greenland ice sheet surface mass balance contribution to future sea level rise using the regional atmospheric climate model MAR

    NARCIS (Netherlands)

    Fettweis, X.; Franco, B.; Tedesco, M.; van Angelen, J.H.; Lenaerts, J.T.M.; van den Broeke, M.R.; Gallée, H.

    2013-01-01

    To estimate the sea level rise (SLR) originating from changes in surface mass balance (SMB) of the Greenland ice sheet (GrIS), we present 21st century climate projections obtained with the regional climate model MAR (Modèle Atmosphérique Régional), forced by output of three CMIP5 (Coupled Model

  11. A regressive methodology for estimating missing data in rainfall daily time series

    Science.gov (United States)

    Barca, E.; Passarella, G.

    2009-04-01

    The presence of gaps in environmental data time series represents a very common but extremely critical problem, since it can produce biased results (Rubin, 1976). Missing data plagues almost all surveys. The problem is how to deal with missing data once it has been deemed impossible to recover the actual missing values. Apart from the amount of missing data, another issue which plays an important role in the choice of any recovery approach is the evaluation of "missingness" mechanisms. When missing data are conditioned by some other variable observed in the data set (Schafer, 1997), the mechanism is called MAR (Missing At Random). Otherwise, when the missingness mechanism depends on the actual value of the missing data, it is called NMAR (Not Missing At Random). The latter is the most difficult condition to model. In the last decade, interest has arisen in the estimation of missing data by using regression (single imputation). More recently, multiple imputation has also become available, which returns a distribution of estimated values (Scheffer, 2002). In this paper an automatic methodology for estimating missing data is presented. In practice, given a gauging station affected by missing data (target station), the methodology checks the randomness of the missing data and classifies the "similarity" between the target station and the other gauging stations spread over the study area. Among the different methods useful for defining the similarity degree, whose effectiveness strongly depends on the data distribution, the Spearman correlation coefficient was chosen. Once the similarity matrix is defined, a suitable nonparametric, univariate, regressive method is applied in order to estimate missing data in the target station: the Theil method (Theil, 1950). Even though the methodology proved to be rather reliable, an improvement in the missing data estimation can be achieved by a generalization. A first possible improvement consists in extending the univariate technique to
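    The nonparametric Theil regression mentioned above takes the slope as the median of all pairwise slopes between the target station and the most similar reference station; the sketch below applies it to made-up rainfall values.

    ```python
    from statistics import median

    def theil_fit(x, y):
        """Return (slope, intercept) of the Theil line: median pairwise slope, median residual intercept."""
        slopes = [(y[j] - y[i]) / (x[j] - x[i])
                  for i in range(len(x)) for j in range(i + 1, len(x)) if x[j] != x[i]]
        slope = median(slopes)
        intercept = median(yi - slope * xi for xi, yi in zip(x, y))
        return slope, intercept

    reference = [0.0, 2.0, 5.5, 10.0, 1.0, 0.0, 3.5]   # reference-station daily rainfall (mm), made up
    target    = [0.0, 1.8, 5.0,  9.1, 1.1, 0.2, 3.0]   # target-station rainfall on the same days, made up
    slope, intercept = theil_fit(reference, target)
    print(f"Imputed value for a missing day with 4.0 mm at the reference station: {slope * 4.0 + intercept:.2f} mm")
    ```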

  12. Estimation of retired mobile phones generation in China: A comparative study on methodology.

    Science.gov (United States)

    Li, Bo; Yang, Jianxin; Lu, Bin; Song, Xiaolong

    2015-01-01

    Due to the rapid development of economy and technology, China has the biggest production and possession of mobile phones around the world. In general, mobile phones have a relatively short lifetime because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. The generation estimation can provide fundamental information to construct the sustainable management system of retired mobile phones and other waste electrical and electronic equipment (WEEE). However, a reliable estimation result is difficult to obtain and verify. The priority aim of this paper is to provide a proper estimation approach for the generation of retired mobile phones in China, by comparing some relevant methods. The results show that the sales&new method has the highest priority for estimating retired mobile phones. The sales&new method shows that 47.92 million mobile phones were retired in 2002, reaching 739.98 million in China in 2012. This presents a clearly increasing tendency with some fluctuations. Furthermore, some discussions on methodology, such as the selection of an improper approach and errors in the input data, are also conducted in order to improve generation estimation of retired mobile phones and other WEEE. Copyright © 2014 Elsevier Ltd. All rights reserved.
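    As an illustration of cohort-based retirement estimation, the sketch below convolves past sales with a lifespan distribution. This is a generic lifespan-distribution calculation, not necessarily the exact sales&new formulation used in the paper, and all figures are hypothetical.

    ```python
    def retired_in_year(sales_by_year, lifespan_pmf, year):
        """Sum past sales cohorts weighted by the probability of retirement after k years."""
        return sum(sales_by_year.get(year - k, 0.0) * p for k, p in lifespan_pmf.items())

    sales = {2008: 150.0, 2009: 180.0, 2010: 220.0, 2011: 260.0}   # million units sold (hypothetical)
    lifespan = {1: 0.10, 2: 0.35, 3: 0.35, 4: 0.20}                # share retired k years after sale (hypothetical)
    print(f"Estimated retirements in 2012: {retired_in_year(sales, lifespan, 2012):.1f} million units")
    ```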

  13. Combined methodology for estimating dose rates and health effects from exposure to radioactive pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Leggett, R.W.; Yalcintas, M.G.

    1980-12-01

    The work described in the report is basically a synthesis of two previously existing computer codes: INREM II, developed at the Oak Ridge National Laboratory (ORNL); and CAIRD, developed by the Environmental Protection Agency (EPA). The INREM II code uses contemporary dosimetric methods to estimate doses to specified reference organs due to inhalation or ingestion of a radionuclide. The CAIRD code employs actuarial life tables to account for competing risks in estimating numbers of health effects resulting from exposure of a cohort to some incremental risk. The combined computer code, referred to as RADRISK, estimates numbers of health effects in a hypothetical cohort of 100,000 persons due to continuous lifetime inhalation or ingestion of a radionuclide. Also briefly discussed in this report is a method of estimating numbers of health effects in a hypothetical cohort due to continuous lifetime exposure to external radiation. This method employs the CAIRD methodology together with dose conversion factors generated by the computer code DOSFACTER, developed at ORNL; these dose conversion factors are used to estimate dose rates to persons due to radionuclides in the air or on the ground surface. The combination of the life table and dosimetric methodologies supports the development of guidelines for the release of radioactive pollutants to the atmosphere, as required by the Clean Air Act Amendments of 1977.
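    A simplified sketch of the actuarial life-table idea behind CAIRD/RADRISK: follow a cohort year by year, apply baseline mortality plus a small radiation-related increment, and count the extra deaths attributable to the increment. The rates are invented placeholders, and the full competing-risk detail is omitted.

    ```python
    def excess_deaths(cohort, years, baseline_rate, excess_rate):
        """Compare survivors with and without a small added annual mortality rate."""
        alive_base, alive_exposed = float(cohort), float(cohort)
        for _ in range(years):
            alive_base *= (1.0 - baseline_rate)
            alive_exposed *= (1.0 - baseline_rate - excess_rate)
        return alive_base - alive_exposed

    # 100,000 persons followed 70 years with an invented 1e-5/yr radiation-related increment.
    print(round(excess_deaths(100_000, 70, baseline_rate=0.012, excess_rate=1.0e-5), 1))
    ```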

  14. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is much needed. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent, methodology-based approach to parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of the parameters was available, and an initial estimation of the complete set of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analysis were completed and a relevant identifiable subset of parameters was determined for a new...

  15. Estimating the potential impacts of a nuclear reactor accident: methodology and case studies

    International Nuclear Information System (INIS)

    Cartwright, J.V.; Beemiller, R.M.; Trott, E.A. Jr.; Younger, J.M.

    1982-04-01

    This monograph describes an industrial impact model that can be used to estimate the regional industry-specific impacts of disasters. Special attention is given to the impacts of possible nuclear reactor accidents. The monograph also presents three applications of the model. The impacts estimated in the case studies are based on (1) general information and reactor-specific data, supplied by the US Nuclear Regulatory Commission (NRC), (2) regional economic models derived from the Regional Input-Output Modeling System (RIMS II) developed at the Bureau of Economic Analysis (BEA), and (3) additional methodology developed especially for taking into account the unique characteristics of a nuclear reactor accident with respect to regional industrial activity
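    A toy illustration of the multiplier logic behind regional input-output models such as RIMS II: a change in final demand in each industry is scaled by a region- and industry-specific multiplier. The multipliers and demand changes below are hypothetical.

    ```python
    def total_output_impact(demand_changes, output_multipliers):
        """Sum industry-level final-demand changes scaled by their output multipliers ($ millions)."""
        return sum(change * output_multipliers[industry] for industry, change in demand_changes.items())

    demand_loss = {"agriculture": -25.0, "manufacturing": -60.0, "services": -40.0}  # $M change in final demand
    multipliers = {"agriculture": 1.8, "manufacturing": 2.1, "services": 1.6}        # hypothetical multipliers
    print(f"Estimated regional output impact: {total_output_impact(demand_loss, multipliers):.1f} $M")
    ```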

  16. A Capacitance-Based Methodology for the Estimation of Piezoelectric Coefficients of Poled Piezoelectric Materials

    KAUST Repository

    Al Ahmad, Mahmoud

    2010-10-04

    A methodology is proposed to estimate the piezoelectric coefficients of bulk piezoelectric materials using simple capacitance measurements. The extracted values of d33 and d31 from the capacitance measurements were 506 pC/N and 247 pC/N, respectively. The d33 value is in agreement with that obtained from the Berlincourt method, which gave a d33 value of 500 pC/N. In addition, the d31 value is in agreement with the value obtained from the optical method, which gave a d31 value of 223 pC/V. These results suggest that the proposed method is a viable way to quickly estimate piezoelectric coefficients of bulk unclamped samples. © 2010 The Electrochemical Society.

  17. Prototype application of best estimate and uncertainty safety analysis methodology to large LOCA analysis

    International Nuclear Information System (INIS)

    Luxat, J.C.; Huget, R.G.

    2001-01-01

    Development of a methodology to perform best estimate and uncertainty nuclear safety analysis has been underway at Ontario Power Generation for the past two and one half years. A key driver for the methodology development, and one of the major challenges faced, is the need to re-establish demonstrated safety margins that have progressively been undermined through excessive and compounding conservatism in deterministic analyses. The major focus of the prototyping applications was to quantify the safety margins that exist at the probable range of high power operating conditions, rather than the highly improbable operating states associated with Limit of the Envelope (LOE) assumptions. In LOE, all parameters of significance to the consequences of a postulated accident are assumed to simultaneously deviate to their limiting values. Another equally important objective of the prototyping was to demonstrate the feasibility of conducting safety analysis as an incremental analysis activity, as opposed to a major re-analysis activity. The prototype analysis solely employed prior analyses of Bruce B large break LOCA events - no new computer simulations were undertaken. This is a significant and novel feature of the prototyping work. This methodology framework has been applied to a postulated large break LOCA in a Bruce generating unit on a prototype basis. This paper presents results of the application. (author)

  18. Wind turbine power coefficient estimation by soft computing methodologies: Comparative study

    International Nuclear Information System (INIS)

    Shamshirband, Shahaboddin; Petković, Dalibor; Saboohi, Hadi; Anuar, Nor Badrul; Inayat, Irum; Akib, Shatirah; Ćojbašić, Žarko; Nikolić, Vlastimir; Mat Kiah, Miss Laiha; Gani, Abdullah

    2014-01-01

    Highlights: • Variable speed operation of wind turbines to increase power generation. • Changeability and fluctuation of wind have to be accounted for. • To build an effective prediction model of the wind turbine power coefficient. • The impact of the variation in the blade pitch angle and tip speed ratio. • Support vector regression applied as the predictive methodology. - Abstract: Wind energy has become a strong contender to traditional fossil fuel energy, particularly with the successful operation of multi-megawatt-sized wind turbines. However, reasonable wind speed is not adequately sustainable everywhere to build an economical wind farm. In wind energy conversion systems, one of the operational problems is the changeability and fluctuation of wind. In most cases, wind speed can vacillate rapidly. Hence, the quality of produced energy becomes an important problem in wind energy conversion plants. Several control techniques have been applied to improve the quality of power generated from wind turbines. In this study, the polynomial and radial basis function (RBF) kernels are applied as the kernel function of support vector regression (SVR) to estimate the optimal power coefficient value of the wind turbines. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR approach compared to other soft computing methodologies
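    A minimal sketch of the SVR_poly / SVR_rbf comparison described above, using scikit-learn; the power-coefficient data are synthetic stand-ins generated from a toy curve, not the turbine measurements used in the study.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    tsr = rng.uniform(2.0, 12.0, 200)                  # tip speed ratio
    pitch = rng.uniform(0.0, 10.0, 200)                # blade pitch angle (degrees)
    # Toy power-coefficient surface plus noise, standing in for measured data.
    cp = 0.5 * np.exp(-((tsr - 8.0) / 3.0) ** 2) * np.exp(-pitch / 15.0) + rng.normal(0.0, 0.01, 200)
    X = np.column_stack([tsr, pitch])

    for kernel in ("poly", "rbf"):
        model = SVR(kernel=kernel, C=10.0, epsilon=0.005).fit(X, cp)
        print(kernel, "training R^2:", round(model.score(X, cp), 3))
    ```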

  19. Integrated cost estimation methodology to support high-performance building design

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, Prasad; Greden, Lara; Eijadi, David; McDougall, Tom [The Weidt Group, Minnetonka (United States); Cole, Ray [Axiom Engineers, Monterey (United States)

    2007-07-01

    Design teams evaluating the performance of energy conservation measures (ECMs) calculate energy savings rigorously with established modelling protocols, accounting for the interaction between various measures. However, incremental cost calculations do not have a similar rigor. Often there is no recognition of cost reductions with integrated design, nor is there assessment of cost interactions amongst measures. This lack of rigor feeds the notion that high-performance buildings cost more, creating a barrier for design teams pursuing aggressive high-performance outcomes. This study proposes an alternative integrated methodology to arrive at a lower perceived incremental cost for improved energy performance. The methodology is based on the use of energy simulations as a means towards integrated design and cost estimation. Various points along the spectrum of integration are identified and characterized by the amount of design effort invested, the scheduling of effort, and relative energy performance of the resultant design. It includes a study of the interactions between building system parameters as they relate to capital costs. Several cost interactions amongst energy measures are found to be significant. The value of this approach is demonstrated with alternatives in a case study that shows the differences between perceived costs for energy measures along various points on the integration spectrum. These alternatives show design tradeoffs and identify how decisions would have been different with a standard costing approach. Areas of further research to make the methodology more robust are identified. Policy measures to encourage the integrated approach and reduce the barriers towards improved energy performance are discussed.

  20. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense-in-depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues, allowing the evaluation of the existing actual margins between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore, the use of best-estimate (BE) codes within reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and the deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for the application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidence. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  1. Validating alternative methodologies to estimate the hydrological regime of temporary streams when flow data are unavailable

    Science.gov (United States)

    Llorens, Pilar; Gallart, Francesc; Latron, Jérôme; Cid, Núria; Rieradevall, Maria; Prat, Narcís

    2016-04-01

    ) were examined. In this case, flow permanence metrics were estimated as the proportion of photographs presenting stream flow. Results indicate that for streams that are dry more than 25% of the time, interviews systematically underestimated flow, but the qualitative information given by inhabitants was of great interest for understanding river dynamics. On the other hand, the use of aerial photographs gave a good estimation of flow permanence, but the seasonality was conditioned by the capture date of the aerial photographs. For these reasons, we recommend using both methodologies together.

  2. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
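    A hedged sketch of the model structure summarized above (log concentration = seasonal wave + flow-related term + long-term trend + serially correlated errors), simulating one synthetic year; the coefficients are invented, and the report's accompanying R functions, not this code, implement the actual method.

    ```python
    import math, random

    def simulate_log_conc(n_days=365, amp=0.8, phase=100, trend=-0.0002, flow_coef=0.3, rho=0.9, sigma=0.25):
        """Simulate daily log-concentration: seasonal wave + flow term + trend + AR(1) errors."""
        random.seed(1)
        eps, series = 0.0, []
        for t in range(n_days):
            seasonal = amp * math.sin(2.0 * math.pi * (t - phase) / 365.25)
            flow_anomaly = random.gauss(0.0, 1.0)       # stand-in for an observed flow deviation
            eps = rho * eps + random.gauss(0.0, sigma)  # serially correlated (AR(1)) error
            series.append(seasonal + flow_coef * flow_anomaly + trend * t + eps)
        return series

    log_c = simulate_log_conc()
    print(f"Simulated annual maximum daily concentration: {math.exp(max(log_c)):.2f} (relative units)")
    ```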

  3. Automated methodology for estimating waste streams generated from decommissioning contaminated facilities

    International Nuclear Information System (INIS)

    Toth, J.J.; King, D.A.; Humphreys, K.K.; Haffner, D.R.

    1994-01-01

    As part of the DOE Programmatic Environmental Impact Statement (PEIS), a viable way to determine aggregate waste volumes, cost, and direct labor hours for decommissioning and decontaminating facilities is required. In this paper, a methodology is provided for determining waste streams, cost and direct labor hours from remediation of contaminated facilities. The method is developed utilizing U.S. facility remediation data and information from several decommissioning programs, including reactor decommissioning projects. The method provides for rapid, consistent analysis for many facility types. Three remediation scenarios are considered for facility D&D: unrestricted land use, semi-restricted land use, and restricted land use. Unrestricted land use involves removing radioactive components, decontaminating the building surfaces, and demolishing the remaining structure. Semi-restricted land use involves removing transuranic contamination and immobilizing the contamination on-site. Restricted land use involves removing the transuranic contamination and leaving the building standing. In both semi-restricted and restricted land use scenarios, verification of containment with environmental monitoring is required. To use the methodology, facilities are placed in a building category depending upon the level of contamination, construction design, and function of the building. Unit volume and unit area waste generation factors are used to calculate waste volumes and estimate the amount of waste generated in each of the following classifications: low-level, transuranic, and hazardous waste. Unit factors for cost and labor hours are also applied to the result to estimate D&D cost and labor hours.
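
    A minimal sketch of the unit-factor bookkeeping described above; the building category, scenario, and every numeric factor below are hypothetical placeholders, not the factors used in the PEIS methodology.

    # Hypothetical unit factors per building category and remediation scenario
    # (values are placeholders, not the factors used in the PEIS methodology)
    unit_factors = {
        ("reactor_building", "unrestricted"): {
            "low_level_m3_per_m2": 0.15,
            "transuranic_m3_per_m2": 0.01,
            "hazardous_m3_per_m2": 0.02,
            "cost_usd_per_m2": 950.0,
            "labor_h_per_m2": 3.2,
        },
    }

    def estimate_dd(category, scenario, floor_area_m2):
        """Scale unit factors by floor area to get waste volumes, cost and labor hours."""
        f = unit_factors[(category, scenario)]
        return {
            "low_level_m3": f["low_level_m3_per_m2"] * floor_area_m2,
            "transuranic_m3": f["transuranic_m3_per_m2"] * floor_area_m2,
            "hazardous_m3": f["hazardous_m3_per_m2"] * floor_area_m2,
            "cost_usd": f["cost_usd_per_m2"] * floor_area_m2,
            "labor_hours": f["labor_h_per_m2"] * floor_area_m2,
        }

    print(estimate_dd("reactor_building", "unrestricted", 12000.0))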

  4. A Methodology of Health Effects Estimation from Air Pollution in Large Asian Cities

    Directory of Open Access Journals (Sweden)

    Keiko Hirota

    2017-09-01

    Full Text Available Health effects caused by air pollution are a growing concern in Asian cities with increasing motorization. This paper discusses methods of estimating the health effects of air pollution in large Asian cities. Due to the absence of statistical data in Asia, this paper carefully chooses a methodology using data from the Japanese compensation system. A basic idea of health effects is captured from simple indicators, such as population and air quality, in a correlation model. This correlation model yields more estimates of respiratory mortality caused by air pollution than the relative risk model does. The correlation model could be an alternative method for estimating mortality besides the relative risk model, since the results of the correlation model are comparable with those of the relative risk model by city and by time series. The classification of respiratory diseases is not available from the statistical yearbooks in many countries. The estimation results could support policy decision-making with respect to public health in a cost-effective way.

  5. Methodology for uncertainty estimation of Hanford tank chemical and radionuclide inventories and concentrations

    International Nuclear Information System (INIS)

    Chen, G.; Ferryman, T.A.; Remund, K.M.

    1998-02-01

    The exact physical and chemical nature of 55 million gallons of toxic waste held in 177 underground waste tanks at the Hanford Site is not known with sufficient detail to support the safety, retrieval, and immobilization missions presented to Hanford. The Hanford Best Basis team has made point estimates of the inventories in each tank. The purpose of this study is to estimate probability distributions for each of the 71 analytes in each of the 177 tanks for which the Hanford Best Basis team has made point estimates. This will enable uncertainty intervals to be calculated for the Best Basis inventories and should facilitate the safety, retrieval, and immobilization missions. Section 2 of this document describes the overall approach used to estimate tank inventory uncertainties. Three major components are considered in this approach: chemical concentration, density, and waste volume. Section 2 also describes the two different methods used to evaluate the tank wastes in terms of sludges and in terms of supernatant or saltcakes. Sections 3 and 4 describe in detail the methodology to assess the probability distributions for each of the three components, as well as the data sources for implementation. The conclusions are given in Section 5.
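
    A schematic Monte Carlo illustration of the three-component approach (chemical concentration, density, and waste volume); the distributions and values below are invented placeholders, not Best Basis data.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000  # Monte Carlo samples

    # Placeholder distributions for one analyte in one tank (illustrative only)
    concentration = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)   # ug/g of waste
    density = rng.normal(loc=1.6, scale=0.1, size=n)                      # g/mL
    volume = rng.triangular(left=900, mode=1000, right=1100, size=n)      # kL of waste

    # Inventory: ug/g * g/mL * kL -> kg (1 kL = 1e6 mL; 1e6 ug = 1 g; 1e3 g = 1 kg)
    inventory_kg = concentration * density * volume * 1e-3

    lo, med, hi = np.percentile(inventory_kg, [2.5, 50, 97.5])
    print(f"median {med:.1f} kg, 95% interval ({lo:.1f}, {hi:.1f}) kg")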

  6. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications.

    Science.gov (United States)

    van Wesenbeeck, Cornelia F A; Keyzer, Michiel A; Nubé, Maarten

    2009-06-27

    As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. One is that we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level, and secondly, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data on the same geographical aggregation level. Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agricultural Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range of 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day for Southern Africa. Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and also mean calorie intake is higher, which implies that agricultural production and hence income must also

  7. Estimation of undernutrition and mean calorie intake in Africa: methodology, findings and implications

    Directory of Open Access Journals (Sweden)

    Nubé Maarten

    2009-06-01

    Full Text Available Abstract Background As poverty and hunger are basic yardsticks of underdevelopment and destitution, the need for reliable statistics in this domain is self-evident. While the measurement of poverty through surveys is relatively well documented in the literature, for hunger, information is much scarcer, particularly for adults, and very different methodologies are applied for children and adults. Our paper seeks to improve on this practice in two ways. One is that we estimate the prevalence of undernutrition in sub-Saharan Africa (SSA) for both children and adults based on anthropometric data available at province or district level, and secondly, we estimate the mean calorie intake and implied calorie gap for SSA, also using anthropometric data on the same geographical aggregation level. Results Our main results are, first, that we find a much lower prevalence of hunger than presented in the Millennium Development reports (17.3% against 27.8% for the continent as a whole). Secondly, we find that there is much less spread in mean calorie intake across the continent than reported by the Food and Agricultural Organization (FAO) in the State of Food and Agriculture, 2007, the only estimate that covers the whole of Africa. While FAO estimates for calorie availability vary from a low of 1760 Kcal/capita/day for Central Africa to a high of 2825 Kcal/capita/day for Southern Africa, our estimates lie in a range of 2245 Kcal/capita/day (Eastern Africa) to 2618 Kcal/capita/day for Southern Africa. Thirdly, we validate the main data sources used (the Demographic and Health Surveys) by comparing them over time and with other available data sources for various countries. Conclusion We conclude that the picture of Africa that emerges from anthropometric data is much less negative than that usually presented. Especially for Eastern and Central Africa, the nutritional status is less critical than commonly assumed and also mean calorie intake is higher, which implies

  8. Evaluating the effects of dam breach methodologies on Consequence Estimation through Sensitivity Analysis

    Science.gov (United States)

    Kalyanapu, A. J.; Thames, B. A.

    2013-12-01

    Dam breach modeling often includes application of models that are sophisticated, yet computationally intensive to compute flood propagation at high temporal and spatial resolutions. This results in a significant need for computational capacity that requires development of newer flood models using multi-processor and graphics processing techniques. Recently, a comprehensive benchmark exercise, the 12th Benchmark Workshop on Numerical Analysis of Dams, was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop is focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case study and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. The current study uses ICOLD hypothetical data including the topography, dam geometric and construction information, and land use/land cover data, along with socio-economic and demographic data. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in the risk assessments. The four methodologies used are: i) Froehlich (1995), ii) MacDonald and Langridge-Monopolis 1984 (MLM), iii) Von Thun and Gillette 1990 (VTG), and iv) Froehlich (2008). To achieve this objective, three different modeling components were used. First, using HEC-RAS v.4.1, dam breach discharge hydrographs are developed. These hydrographs are then provided as flow inputs into a two dimensional flood model named Flood2D-GPU, which leverages the computer's graphics card for much improved computational performance. Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, are input into HEC-FIA, which provides the

  9. A methodology for estimating health benefits of electricity generation using renewable technologies.

    Science.gov (United States)

    Partridge, Ian; Gamkhar, Shama

    2012-02-01

    At Copenhagen, the developed countries agreed to provide up to $100 bn per year to finance climate change mitigation and adaptation by developing countries. Projects aimed at cutting greenhouse gas (GHG) emissions will need to be evaluated against dual criteria: from the viewpoint of the developed countries they must cut emissions of GHGs at reasonable cost, while host countries will assess their contribution to development, or simply their overall economic benefits. Co-benefits of some types of project will also be of interest to host countries: for example some projects will contribute to reducing air pollution, thus improving the health of the local population. This paper uses a simple damage function methodology to quantify some of the health co-benefits of replacing coal-fired generation with wind or small hydro in China. We estimate the monetary value of these co-benefits and find that it is probably small compared to the added costs. We have not made a full cost-benefit analysis of renewable energy in China as some likely co-benefits are omitted from our calculations. Our results are subject to considerable uncertainty; however, after careful consideration of their likely accuracy and comparisons with other studies, we believe that they provide a good first cut estimate of co-benefits and are sufficiently robust to stand as a guide for policy makers. In addition to these empirical results, a key contribution made by the paper is to demonstrate a simple and reasonably accurate methodology for health benefits estimation that applies the most recent academic research in the field to the solution of an increasingly important problem. Copyright © 2011 Elsevier Ltd. All rights reserved.
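
    The damage-function chain can be sketched schematically for a single pollutant and a single health endpoint; every number below is a placeholder rather than a value from the study, and the intake-fraction shortcut is an assumption used only to keep the example short.

    # Illustrative impact-pathway (damage function) calculation for one pollutant.
    # Every number below is a placeholder, not a value from the study.
    displaced_coal_gwh = 500.0            # coal generation displaced by wind/small hydro
    pm25_g_per_kwh = 0.4                  # emission factor of the displaced coal plants
    intake_fraction = 5e-6                # fraction of emitted PM2.5 inhaled by population
    exposed_population = 5_000_000
    breathing_rate_m3_per_day = 13.0

    emissions_tonnes = displaced_coal_gwh * 1e6 * pm25_g_per_kwh / 1e6   # g/kWh over kWh -> t

    # Population-weighted change in annual-average PM2.5 exposure (ug/m3)
    delta_conc = (emissions_tonnes * 1e12 * intake_fraction
                  / (exposed_population * breathing_rate_m3_per_day * 365))

    # Concentration-response slope: avoided deaths per (ug/m3) per person per year (placeholder)
    beta = 1e-6
    avoided_deaths = beta * delta_conc * exposed_population

    value_per_death_usd = 200_000.0       # placeholder monetary valuation
    print(avoided_deaths, avoided_deaths * value_per_death_usd)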

  10. Methodology to estimate particulate matter emissions from certified commercial aircraft engines.

    Science.gov (United States)

    Wayson, Roger L; Fleming, Gregg G; Lovinelli, Ralph

    2009-01-01

    Today, about one-fourth of U.S. commercial service airports, including 41 of the busiest 50, are either in nonattainment or maintenance areas per the National Ambient Air Quality Standards. U.S. aviation activity is forecasted to triple by 2025, while at the same time, the U.S. Environmental Protection Agency (EPA) is evaluating stricter particulate matter (PM) standards on the basis of documented human health and welfare impacts. Stricter federal standards are expected to impede capacity and limit aviation growth if regulatory mandated emission reductions occur as for other non-aviation sources (i.e., automobiles, power plants, etc.). In addition, strong interest exists as to the role aviation emissions play in air quality and climate change issues. These reasons underpin the need to quantify and understand PM emissions from certified commercial aircraft engines, which has led to the need for a methodology to predict these emissions. Standardized sampling techniques to measure volatile and nonvolatile PM emissions from aircraft engines do not exist. As such, a first-order approximation (FOA) was derived to fill this need based on available information. FOA1.0 only allowed prediction of nonvolatile PM. FOA2.0 was a change to include volatile PM emissions on the basis of the ratio of nonvolatile to volatile emissions. Recent collaborative efforts by industry (manufacturers and airlines), research establishments, and regulators have begun to provide further insight into the estimation of the PM emissions. The resultant PM measurement datasets are being analyzed to refine sampling techniques and progress towards standardized PM measurements. These preliminary measurement datasets also support the continued refinement of the FOA methodology. FOA3.0 disaggregated the prediction techniques to allow for independent prediction of nonvolatile and volatile emissions on a more theoretical basis. The Committee for Aviation Environmental Protection of the International Civil

  11. Binational Arsenic Exposure Survey: Methodology and Estimated Arsenic Intake from Drinking Water and Urinary Arsenic Concentrations

    Directory of Open Access Journals (Sweden)

    Robin B. Harris

    2012-03-01

    Full Text Available The Binational Arsenic Exposure Survey (BAsES) was designed to evaluate probable arsenic exposures in selected areas of southern Arizona and northern Mexico, two regions with known elevated levels of arsenic in groundwater reserves. This paper describes the methodology of BAsES and the relationship between estimated arsenic intake from beverages and arsenic output in urine. Households from eight communities were selected for their varying groundwater arsenic concentrations in Arizona, USA and Sonora, Mexico. Adults responded to questionnaires and provided dietary information. A first morning urine void and water from all household drinking sources were collected. Associations between urinary arsenic concentration (total, organic, inorganic) and estimated level of arsenic consumed from water and other beverages were evaluated through crude associations and by random effects models. Median estimated total arsenic intake from beverages among participants from Arizona communities ranged from 1.7 to 14.1 µg/day compared to 0.6 to 3.4 µg/day among those from Mexico communities. In contrast, median urinary inorganic arsenic concentrations were greatest among participants from Hermosillo, Mexico (6.2 µg/L) whereas a high of 2.0 µg/L was found among participants from Ajo, Arizona. Estimated arsenic intake from drinking water was associated with urinary total arsenic concentration (p < 0.001), urinary inorganic arsenic concentration (p < 0.001), and urinary sum of species (p < 0.001). Urinary arsenic concentrations increased between 7% and 12% for each one percent increase in arsenic consumed from drinking water. Variability in arsenic intake from beverages and urinary arsenic output yielded counterintuitive results. Estimated intake of arsenic from all beverages was greatest among Arizonans, yet participants in Mexico had higher urinary total and inorganic arsenic concentrations. Other contributors to urinary arsenic concentrations should be evaluated.
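
    The intake side of such a survey reduces to summing concentration times consumed volume over each beverage source; a minimal sketch with hypothetical values for one participant.

    # Hypothetical one-participant record: beverage arsenic concentrations (ug/L)
    # and self-reported daily consumption (L/day). Values are illustrative only.
    beverages = {
        "tap_water":      {"as_ug_per_l": 8.0, "liters_per_day": 1.2},
        "bottled_water":  {"as_ug_per_l": 0.5, "liters_per_day": 0.5},
        "coffee_and_tea": {"as_ug_per_l": 6.0, "liters_per_day": 0.4},
    }

    daily_intake_ug = sum(b["as_ug_per_l"] * b["liters_per_day"] for b in beverages.values())
    print(f"estimated arsenic intake: {daily_intake_ug:.1f} ug/day")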

  12. Temperature-based estimation of global solar radiation using soft computing methodologies

    Science.gov (United States)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak

    2016-07-01

    Precise knowledge of solar radiation is indeed essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation would be appealing owing to the broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation between three methodologies of adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly) is performed. Five combinations of Tmax, Tmin, and Tavg served as inputs to develop ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the highest accuracies are achieved by models (5) using Tmax - Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. The results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
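
    A minimal scikit-learn sketch of an SVR model with an RBF kernel using Tmax - Tmin and Tmax as inputs (the combination used in model (5)); the data are synthetic and the hyperparameters are assumptions, not the settings used in the paper.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 365
    t_min = rng.uniform(5, 25, n)
    t_max = t_min + rng.uniform(3, 15, n)
    # Synthetic "measured" daily global radiation loosely tied to the temperature range
    dhgsr = 5.0 + 1.2 * np.sqrt(t_max - t_min) + rng.normal(0, 0.8, n)   # MJ/m2

    X = np.column_stack([t_max - t_min, t_max])   # inputs of model (5): Tmax - Tmin and Tmax
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, dhgsr)

    pred = model.predict(X)
    rmse = np.sqrt(np.mean((pred - dhgsr) ** 2))
    print(f"RMSE on synthetic data: {rmse:.2f} MJ/m2")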

  13. A methodology to estimate earthquake effects on fractures intersecting canister holes

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, P.; Wallmann, P.; Thomas, A.; Follin, S. [Golder Associates Inc. (Sweden)

    1997-03-01

    A literature review and a preliminary numerical modeling study were carried out to develop and demonstrate a method for estimating displacements on fractures near to or intersecting canister emplacement holes. The method can be applied during preliminary evaluation of candidate sites prior to any detailed drilling or underground excavation, utilizing lineament maps and published regression relations between surface rupture trace length and earthquake magnitude, rupture area and displacements. The calculated displacements can be applied to lineament traces which are assumed to be faults and may be the sites for future earthquakes. Next, a discrete fracture model is created for secondary faulting and jointing in the vicinity of the repository. These secondary fractures may displace due to the earthquake on the primary faults. The three-dimensional numerical model assumes linear elasticity and linear elastic fracture mechanics which provides a conservative displacement estimate, while still preserving realistic fracture patterns. Two series of numerical studies were undertaken to demonstrate how the methodology could be implemented and how results could be applied to questions regarding site selection and performance assessment. The first series illustrates how earthquake damage to a hypothetical repository for a specified location (Aespoe) could be estimated. A second series examined the displacements induced by earthquakes varying in magnitude from 6.0 to 8.2 as a function of how close the earthquake was in relation to the repository. 143 refs, 25 figs, 7 tabs.

  14. A methodology to estimate earthquake effects on fractures intersecting canister holes

    International Nuclear Information System (INIS)

    La Pointe, P.; Wallmann, P.; Thomas, A.; Follin, S.

    1997-03-01

    A literature review and a preliminary numerical modeling study were carried out to develop and demonstrate a method for estimating displacements on fractures near to or intersecting canister emplacement holes. The method can be applied during preliminary evaluation of candidate sites prior to any detailed drilling or underground excavation, utilizing lineament maps and published regression relations between surface rupture trace length and earthquake magnitude, rupture area and displacements. The calculated displacements can be applied to lineament traces which are assumed to be faults and may be the sites for future earthquakes. Next, a discrete fracture model is created for secondary faulting and jointing in the vicinity of the repository. These secondary fractures may displace due to the earthquake on the primary faults. The three-dimensional numerical model assumes linear elasticity and linear elastic fracture mechanics which provides a conservative displacement estimate, while still preserving realistic fracture patterns. Two series of numerical studies were undertaken to demonstrate how the methodology could be implemented and how results could be applied to questions regarding site selection and performance assessment. The first series illustrates how earthquake damage to a hypothetical repository for a specified location (Aespoe) could be estimated. A second series examined the displacements induced by earthquakes varying in magnitude from 6.0 to 8.2 as a function of how close the earthquake was in relation to the repository. 143 refs, 25 figs, 7 tabs

  15. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: ► Indoor pollution control via photocatalytic reactors. ► Scaling-up methodology based on previously determined mechanistic kinetics. ► Radiation interchange model between catalytic walls using configuration factors. ► Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO2 as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.

  16. Dasymetric high resolution population distribution estimates for improved decision making, with a case study of sea-level rise vulnerability in Boca Raton, Florida

    Science.gov (United States)

    Ziegler, Hannes Moritz

    Planners and managers often rely on coarse population distribution data from the census for addressing various social, economic, and environmental problems. In the analysis of physical vulnerabilities to sea-level rise, census units such as blocks or block groups are coarse relative to the required decision-making application. This study explores the benefits offered from integrating image classification and dasymetric mapping at the household level to provide detailed small area population estimates at the scale of residential buildings. In a case study of Boca Raton, FL, a sea-level rise inundation grid based on mapping methods by NOAA is overlaid on the highly detailed population distribution data to identify vulnerable residences and estimate population displacement. The enhanced spatial detail offered through this method has the potential to better guide targeted strategies for future development, mitigation, and adaptation efforts.

  17. A methodology for the estimation of the radiological consequences of a Loss of Coolant Accident

    Energy Technology Data Exchange (ETDEWEB)

    Kereszturi, Andras; Brolly, Aron; Panka, Istvan; Pazmandi, Tamas; Trosztel, Istvan [Hungarian Academy of Sciences, Budapest (Hungary). MTA EK, Centre for Energy Research

    2017-09-15

    For calculation of the radiological consequences of Large Break Loss of Coolant Accident (LBLOCA) events, a set of computer codes modeling the corresponding physical processes and disciplines, together with their appropriate subsequent data exchange, is necessary. To demonstrate the methodology applied in MTA EK, an LBLOCA event at shutdown reactor state - when only a limited configuration of the Emergency Core Cooling System (ECCS) is available - was selected. In this special case, fission gas release from a number of fuel pins is obtained from the analyses. This paper describes the initiating event, the corresponding thermal hydraulic calculations and the further physical processes, and the necessary models and computer codes and their connections. Additionally, the applied conservative assumptions and the Best Estimate Plus Uncertainty (B+U) evaluation used for characterizing the pin power and burnup distribution in the core are presented, along with the fuel behavior processes. Finally, the newly developed methodology to predict whether or not the fuel pins lose their hermeticity is described, and the results of the activity transport and dose calculations are shown.

  18. Waste management programmatic environmental impact statement methodology for estimating human health risks

    International Nuclear Information System (INIS)

    Bergenback, B.; Blaylock, B.P.; Legg, J.L.

    1995-05-01

    The US Department of Energy (DOE) has produced large quantities of radioactive and hazardous waste during years of nuclear weapons production. As a result, a large number of sites across the DOE Complex have become chemically and/or radiologically contaminated. In 1990, the Secretary of Energy charged the DOE Office of Environmental Restoration and Waste Management (EM) with the task of preparing a Programmatic Environmental Impact Statement (PEIS). The PEIS should identify and assess the potential environmental impacts of implementing several integrated Environmental Restoration (ER) and Waste Management (WM) alternatives. The determination and integration of appropriate remediation activities and sound waste management practices is vital for ensuring the diminution of adverse human health impacts during site cleanup and waste management programs. This report documents the PEIS risk assessment methodology used to evaluate human health risks posed by WM activities. The methodology presents a programmatic cradle-to-grave risk assessment for EM program activities. A unit dose approach is used to estimate risks posed by WM activities and is the subject of this document.

  19. A statistical methodology for the estimation of extreme wave conditions for offshore renewable applications

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kalogeri, Christina; Galanis, George

    2015-01-01

    and post-process outputs from a high resolution numerical wave modeling system for extreme wave estimation based on the significant wave height. This approach is demonstrated through the data analysis at a relatively deep water site, FINO 1, as well as a relatively shallow water area, coastal site Horns Rev, which is located in the North Sea, west of Denmark. The post-processing targets at correcting the modeled time series of the significant wave height, in order to match the statistics of the corresponding measurements, including not only the conventional parameters such as the mean and standard ... as a characteristic index of extreme wave conditions. The results from the proposed methodology seem to be in a good agreement with the measurements at both the relatively deep, open water and the shallow, coastal water sites, providing a potentially useful tool for offshore renewable energy applications.

  20. A novel methodology to estimate the evolution of construction waste in construction sites.

    Science.gov (United States)

    Katz, Amnon; Baum, Hadassa

    2011-02-01

    This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed in order to provide a model that will predict the flow of construction waste. The amount of waste and its constituents, produced on 10 relatively large construction sites (7000-32,000 m2 of built area) was monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m3 per 1 m2 of floor area. A good correlation was found between the model predictions and actual data from the field survey. Copyright © 2010 Elsevier Ltd. All rights reserved.
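
    A rough sketch of the exponential-accumulation idea; the functional form and shape parameter are assumed for illustration, while the total of 0.2 m3 per m2 of floor area is the figure reported above.

    import math

    def cumulative_waste(progress, total_waste_m3, k=3.0):
        """Assumed exponential accumulation: little waste early, most towards the end.
        progress is the fraction of project duration completed (0..1); the shape
        parameter k is chosen here for illustration only."""
        return total_waste_m3 * (math.exp(k * progress) - 1.0) / (math.exp(k) - 1.0)

    floor_area_m2 = 15_000.0
    total_waste_m3 = 0.2 * floor_area_m2      # overall figure of 0.2 m3 per m2 from the paper

    for p in (0.25, 0.5, 0.75, 1.0):
        print(f"{p:.0%} complete: {cumulative_waste(p, total_waste_m3):.0f} m3 accumulated")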

  1. Estimating the cost of delaying a nuclear power plant: methodology and application

    International Nuclear Information System (INIS)

    Hill, L.J.; Tepel, R.C.; Van Dyke, J.W.

    1985-01-01

    This paper presents an analysis of an actual 24-month nuclear power plant licensing delay under alternate assumptions about regulatory practice, sources of replacement power, and the cost of the plant. The analysis focuses on both the delay period and periods subsequent to the delay. The methodology utilized to simulate the impacts involved the recursive interaction of a generation-costing program to estimate fuel-replacement costs and a financial regulatory model to concomitantly determine the impact on the utility, its ratepayers, and security issues. The results indicate that a licensing delay has an adverse impact on the utility's internal generation of funds and financial indicators used to evaluate financial soundness. The direction of impact on electricity rates is contingent on the source of fuel used for replacement power. 5 references, 5 tables

  2. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Science.gov (United States)

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological

  3. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    Directory of Open Access Journals (Sweden)

    Suzana Papile Maciel Carvalho

    2013-07-01

    Full Text Available Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. OBJECTIVE: This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. MATERIAL AND METHODS: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. RESULTS: The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. CONCLUSION: It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South)

  4. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations are those that include such scale variety and physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of uncertainty included in simulations is needed to reveal the sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to derive a degree of certainty by verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as a set of computing processes with deductive and inductive approaches modelled on human reasoning processes. Our idea is to execute deductive and inductive simulations corresponding to the deductive and inductive approaches. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  5. Estimation of the laser cutting operating cost by support vector regression methodology

    Science.gov (United States)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

    Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results were then compared with artificial neural network and genetic programming results. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.

  6. A practical and transferable methodology for dose estimation in irradiated spices based on thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    D'Oca, M.C.; Bartolotta, A.; Cammilleri, C.; Giuffrida, S.; Parlato, A.; Di Stefano, V.

    2008-01-01

    Full text: Among the industrial applications of ionizing radiation, the treatment of food for preservation purposes is a worldwide recognized tool, provided that proper and validated identification methods are available and used. Thermoluminescence (TL) dosimetry is the physical method validated by the European Committee for Standardization for food from which silicate minerals can be isolated, such as spices and aromatic herbs. The aim of this work was to set up a reasonably simple procedure, alternative to the recommended one, for the identification of irradiated spices and, at the same time, to estimate the original dose in the irradiated product, using TL and the additive dose method, even after months of storage. We have already shown that the additive dose method can be applied with TL dosimetry if the TL response of the silicate specimen after extraction is always added to the response after each irradiation; the added doses applied were higher than 1 kGy, which can, however, give saturation problems. The newly proposed methodology makes use of added doses lower than 600 Gy; the entire process can be completed within a few hours and a linear fit can be utilized. The method was applied to the silicates extracted from oregano samples soon after the radiation treatment (original doses: 2, 3 and 5 kGy), and after one year of storage at room conditions in the dark (original doses: 1-2 kGy). The procedure allows the identification of irradiated samples, without any false positives, together with an estimation of the dose range.
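
    The additive-dose step is essentially a linear extrapolation; the sketch below uses synthetic TL readings and assumed added doses below 600 Gy (as in the proposed procedure) and recovers the original dose from the fitted line.

    import numpy as np

    # Added laboratory doses (Gy) and the corresponding TL responses of the silicate
    # specimen (arbitrary units). The readings are synthetic, for illustration only.
    added_dose = np.array([0.0, 150.0, 300.0, 450.0, 600.0])
    tl_signal = np.array([2.10, 2.41, 2.68, 3.01, 3.29])   # grows linearly with dose

    slope, intercept = np.polyfit(added_dose, tl_signal, 1)

    # Extrapolating the fitted line back to zero TL signal gives the dose already
    # absorbed by the sample before the laboratory irradiations.
    estimated_original_dose_gy = intercept / slope
    print(f"estimated original dose: {estimated_original_dose_gy:.0f} Gy")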

  7. Methodology to estimate variations in solar radiation reaching densely forested slopes in mountainous terrain.

    Science.gov (United States)

    Sypka, Przemysław; Starzak, Rafał; Owsiak, Krzysztof

    2016-12-01

    Solar radiation reaching densely forested slopes is one of the main factors influencing the water balance between the atmosphere, tree stands and the soil. It also has a major impact on site productivity, spatial arrangement of vegetation structure as well as forest succession. This paper presents a methodology to estimate variations in solar radiation reaching tree stands in a small mountain valley. Measurements taken in three inter-forest meadows unambiguously showed the relationship between the amount of solar insolation and the shading effect caused mainly by the contour of surrounding tree stands. Therefore, appropriate knowledge of elevation, aspect and tilt angles of the analysed planes had to be taken into consideration during modelling. At critical times, especially in winter, the diffuse and reflected components of solar radiation only reached some of the sites studied as the beam component of solar radiation was totally blocked by the densely forested mountain slopes in the neighbourhood. The cross-section contours and elevation angles of all obstructions are estimated from a digital surface model including both digital elevation model and the height of tree stands. All the parameters in a simplified, empirical model of the solar insolation reaching a given horizontal surface within the research valley are dependent on the sky view factor (SVF). The presented simplified, empirical model and its parameterisation scheme should be easily adaptable to different complex terrains or mountain valleys characterised by diverse geometry or spatial orientation. The model was developed and validated (R2 = 0.92, σ = 0.54) based on measurements taken at research sites located in the Silesian Beskid Mountain Range. A thorough understanding of the factors determining the amount of solar radiation reaching woodlands ought to considerably expand the knowledge of the water exchange balance within forest complexes as well as the estimation of site

  8. Regression methodology in groundwater composition estimation with composition predictions for Romuvaara borehole KR10

    Energy Technology Data Exchange (ETDEWEB)

    Luukkonen, A.; Korkealaakso, J.; Pitkaenen, P. [VTT Communities and Infrastructure, Espoo (Finland)

    1997-11-01

    Teollisuuden Voima Oy selected five investigation areas for preliminary site studies (1987-1992). The more detailed site investigation project, launched at the beginning of 1993 and presently supervised by Posiva Oy, is concentrated to three investigation areas. Romuvaara at Kuhmo is one of the present target areas, and the geochemical, structural and hydrological data used in this study are extracted from there. The aim of the study is to develop suitable methods for groundwater composition estimation based on a group of known hydrogeological variables. The input variables used are related to the host type of groundwater, hydrological conditions around the host location, mixing potentials between different types of groundwater, and minerals equilibrated with the groundwater. The output variables are electrical conductivity, Ca, Mg, Mn, Na, K, Fe, Cl, S, HS, SO4, alkalinity, 3H, 14C, 13C, Al, Sr, F, Br and I concentrations, and pH of the groundwater. The methodology is to associate the known hydrogeological conditions (i.e. input variables), with the known water compositions (output variables), and to evaluate mathematical relations between these groups. Output estimations are done with two separate procedures: partial least squares regressions on the principal components of input variables, and by training neural networks with input-output pairs. Coefficients of linear equations and trained networks are optional methods for actual predictions. The quality of output predictions are monitored with confidence limit estimations, evaluated from input variable covariances and output variances, and with charge balance calculations. Groundwater compositions in Romuvaara borehole KR10 are predicted at 10 metre intervals with both prediction methods. 46 refs.

  9. ALTERNATIVE METHODOLOGIES FOR THE ESTIMATION OF LOCAL POINT DENSITY INDEX: MOVING TOWARDS ADAPTIVE LIDAR DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-07-01

    Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds will be processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of extracted information from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied for laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take the 3D relationship among the points and the physical properties of the surfaces they belong to into account. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper will discuss these approaches and highlight their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
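
    A minimal sketch of a 3D local point density (neighbours inside a spherical neighbourhood divided by the sphere volume), in the spirit of the simplest approach mentioned above; the point cloud and radius are arbitrary, and this is not necessarily the authors' exact definition.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(7)
    points = rng.uniform(0.0, 10.0, size=(5000, 3))   # stand-in LiDAR point cloud (x, y, z)

    radius = 0.5                                      # spherical neighbourhood radius (m)
    tree = cKDTree(points)
    neighbour_counts = np.array(
        [len(tree.query_ball_point(p, radius)) - 1 for p in points])   # exclude the point itself

    sphere_volume = 4.0 / 3.0 * np.pi * radius ** 3
    local_density = neighbour_counts / sphere_volume   # points per cubic metre
    print(local_density.mean(), local_density.min(), local_density.max())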

  10. Methodology for Estimation of Flood Magnitude and Frequency for New Jersey Streams

    Science.gov (United States)

    Watson, Kara M.; Schopp, Robert D.

    2009-01-01

    Methodologies were developed for estimating flood magnitudes at the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for unregulated or slightly regulated streams in New Jersey. Regression equations that incorporate basin characteristics were developed to estimate flood magnitude and frequency for streams throughout the State by use of a generalized least squares regression analysis. Relations between flood-frequency estimates based on streamflow-gaging-station discharge and basin characteristics were determined by multiple regression analysis, and weighted by effective years of record. The State was divided into five hydrologically similar regions to refine the regression equations. The regression analysis indicated that flood discharge, as determined by the streamflow-gaging-station annual peak flows, is related to the drainage area, main channel slope, percentage of lake and wetland areas in the basin, population density, and the flood-frequency region, at the 95-percent confidence level. The standard errors of estimate for the various recurrence-interval floods ranged from 48.1 to 62.7 percent. Annual-maximum peak flows observed at streamflow-gaging stations through water year 2007 and basin characteristics determined using geographic information system techniques for 254 streamflow-gaging stations were used for the regression analysis. Drainage areas of the streamflow-gaging stations range from 0.18 to 779 mi2. Peak-flow data and basin characteristics for 191 streamflow-gaging stations located in New Jersey were used, along with peak-flow data for stations located in adjoining States, including 25 stations in Pennsylvania, 17 stations in New York, 16 stations in Delaware, and 5 stations in Maryland. Streamflow records for selected stations outside of New Jersey were included in the present study because hydrologic, physiographic, and geologic boundaries commonly extend beyond political boundaries. The StreamStats web application was developed
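
    The regression form is the usual log-log relation between peak discharge and basin characteristics; the sketch below uses ordinary least squares on synthetic basins as a stand-in (the report uses generalized least squares weighted by effective record length, which is omitted here), and all coefficients are invented.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    drainage_area = rng.lognormal(np.log(50), 1.0, n)        # mi2 (synthetic)
    channel_slope = rng.lognormal(np.log(20), 0.5, n)        # ft/mi (synthetic)
    storage_pct = rng.uniform(0.5, 15.0, n)                  # % lakes and wetlands (synthetic)

    # Synthetic 100-year peak flows following an assumed log-log relation plus noise
    log_q100 = (2.0 + 0.75 * np.log10(drainage_area) + 0.3 * np.log10(channel_slope)
                - 0.2 * np.log10(storage_pct + 1) + rng.normal(0, 0.1, n))

    # Ordinary least squares fit of log10(Q100) on log-transformed basin characteristics
    X = np.column_stack([np.ones(n), np.log10(drainage_area), np.log10(channel_slope),
                         np.log10(storage_pct + 1)])
    coeffs, *_ = np.linalg.lstsq(X, log_q100, rcond=None)
    print("fitted intercept and exponents:", coeffs)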

  11. Estimation of erosion-accumulative processes at the Inia River’s mouth near high-rise construction zones.

    Directory of Open Access Journals (Sweden)

    Sineeva Natalya

    2018-01-01

    Full Text Available The relevance of our study stems from the increasing man-made impact on water bodies and associated land resources within urban areas and, as a consequence, from changes in the morphology and dynamics of river channels. This leads to the need to predict the development of erosion-accumulation processes, especially within built-up urban areas. The purpose of the study is to develop programs for the assessment of erosion-accumulation processes at a water body, the mouth area of the Inia River, in the prospective high-rise construction zone of a residential microdistrict, where the floodplain-channel complex is expected to develop intensively. Results of the study: When comparing flow velocities measured in the field with those calculated from the model, only a slight discrepancy was recorded. This allows us to say that the numerical model reliably describes the physical processes developing in the river. The calculations carried out to assess the direction and intensity of channel re-formation made it possible to conclude that erosion processes slightly predominate over accumulative ones on the undeveloped part of the Inia River (the activity of these processes is noticeable only in certain areas, near the banks and the island). Importance of the study: The evaluation of erosion-accumulation processes can be used in design decisions for the future high-rise construction of this territory, which will increase its economic efficiency.

  12. Estimation of erosion-accumulative processes at the Inia River's mouth near high-rise construction zones.

    Science.gov (United States)

    Sineeva, Natalya

    2018-03-01

    The relevance of our study stems from the increasing man-made impact on water bodies and associated land resources within urban areas and, as a consequence, from changes in the morphology and dynamics of river channels. This leads to the need to predict the development of erosion-accumulation processes, especially within built-up urban areas. The purpose of the study is to develop programs for the assessment of erosion-accumulation processes at a water body, the mouth area of the Inia River, in the prospective high-rise construction zone of a residential microdistrict, where the floodplain-channel complex is expected to develop intensively. Results of the study: When comparing flow velocities measured in the field with those calculated from the model, only a slight discrepancy was recorded. This allows us to say that the numerical model reliably describes the physical processes developing in the river. The calculations carried out to assess the direction and intensity of channel re-formation made it possible to conclude that erosion processes slightly predominate over accumulative ones on the undeveloped part of the Inia River (the activity of these processes is noticeable only in certain areas, near the banks and the island). Importance of the study: The evaluation of erosion-accumulation processes can be used in design decisions for the future high-rise construction of this territory, which will increase its economic efficiency.

  13. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model architecture uncertainty on model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3/s after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
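
    The GLUE step itself can be sketched briefly: sample parameter sets, score each with a likelihood measure (Nash-Sutcliffe efficiency here), keep the behavioural sets above a threshold, and form uncertainty bounds from their simulations. The model below is a trivial stand-in, not the study's monthly water balance model, and the threshold and data are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)
    precip = rng.gamma(2.0, 40.0, 96)                          # 8 years of monthly rainfall (mm)
    observed = 0.55 * precip + rng.normal(0, 8, precip.size)   # synthetic "observed" runoff (mm)

    def toy_model(p, runoff_coeff):
        """Trivial one-parameter stand-in for a lumped monthly water balance model."""
        return runoff_coeff * p

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency used here as the GLUE likelihood measure."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    samples = rng.uniform(0.2, 0.9, 2000)                      # Monte Carlo parameter samples
    scores = np.array([nse(toy_model(precip, c), observed) for c in samples])

    behavioural = samples[scores > 0.6]                        # behavioural parameter sets
    sims = np.array([toy_model(precip, c) for c in behavioural])

    # 5-95% uncertainty bounds across behavioural simulations (GLUE proper weights these
    # by the likelihood measure; plain percentiles are used here for brevity)
    lower = np.percentile(sims, 5, axis=0)
    upper = np.percentile(sims, 95, axis=0)
    print("behavioural sets:", len(behavioural), "mean bound width:", float(np.mean(upper - lower)))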

  14. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability.

    Science.gov (United States)

    Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine; Lichtenstein, Alice H

    2016-10-01

    The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. We examined the intra- and inter-individual variability in glycemic response to a single food challenge and methodologic and biological factors that potentially mediate this response. The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18-85 y), and body mass index [BMI (in kg/m²): 20-35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing sample size, replication of reference and test foods, and length of blood sampling, as well as AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in mean GI value for white bread, respectively. These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that it is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status significantly contributes to the variability in GI value
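
    One of the area-under-the-curve options referred to above is the incremental AUC that ignores area below the fasting baseline; a minimal sketch of a GI calculation on that basis is given below. The function names and glucose curves are illustrative, not the study's data, and the clipping rule is an assumption about which AUC variant is used.

      import numpy as np

      def incremental_auc(times_min, glucose, baseline=None):
          # Trapezoidal area above the fasting baseline, ignoring any area below it.
          baseline = glucose[0] if baseline is None else baseline
          excursion = np.clip(np.asarray(glucose, dtype=float) - baseline, 0.0, None)
          return np.trapz(excursion, times_min)

      def glycemic_index(times_min, test_response, reference_responses):
          # GI = 100 * iAUC(test food) / mean iAUC of repeated glucose references,
          # both servings providing 50 g available carbohydrate.
          ref_auc = np.mean([incremental_auc(times_min, r) for r in reference_responses])
          return 100.0 * incremental_auc(times_min, test_response) / ref_auc

      # Illustrative (made-up) serum glucose curves in mmol/L over 2 h.
      t = [0, 15, 30, 45, 60, 90, 120]
      glucose_reference = [[5.0, 7.5, 8.6, 8.0, 7.0, 5.8, 5.1]]
      white_bread = [5.0, 6.8, 7.6, 7.2, 6.5, 5.6, 5.0]
      print(round(glycemic_index(t, white_bread, glucose_reference)))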

  15. ETE-EVAL: a methodology for D and D cost estimation

    International Nuclear Information System (INIS)

    Decobert, G.; Robic, S.; Vanel, V.

    2008-01-01

    In compliance with Article 20 of the sustainable radioactive materials and waste management act dated 28 June 2006, the CEA and AREVA are required every three years to revise the cost of decommissioning their facilities and to provide the necessary assets by constituting a dedicated fund. For the 2007 revision the CEA used ETE-EVAL V5. Similarly, AREVA reevaluated the cost of decontaminating and dismantling its facilities at La Hague, as the previous estimate in 2004 did not take into account the complete cleanup of all the structural work. ETE-EVAL V5 is a computer application designed to estimate the cost of decontamination and dismantling of basic nuclear installations (INB). It has been qualified by Bureau Veritas and audited. ETE-EVAL V5 has become the official software for cost assessment of CEA civilian and AREVA decommissioning projects. It has been used by the DPAD (Decontamination and Dismantling Projects Department) cost assessment group to estimate the cost of decommissioning some thirty facilities (cost update on completion for the dedicated fund for dismantling civilian CEA facilities) and by AREVA to estimate the cost of decommissioning its fuel cycle back-end facilities. Some necessary modifications are now being implemented to allow for the specific aspects of fuel cycle front-end facilities. The computational method is based on physical, radiological and waste inventories following a particular methodology, and on interviews with operating personnel to compile ratios and financial data (operating cost, etc.) and enter them in a database called GREEN (from the French acronym for Management Ratios for Assessment of Nuclear Facilities). ETE-EVAL V5 comprises the cost assessment module and GREEN database. It has been enriched with the lessons learned from experience, and can be adapted as necessary to meet installation-specific requirements. The cost assessment module allows the user to estimate decommissioning costs once the inventory has been

  16. A Probabilistic and Observation Based Methodology to Estimate Small Craft Harbor Vulnerability to Tsunami Events

    Science.gov (United States)

    Keen, A. S.; Lynett, P. J.; Ayca, A.

    2016-12-01

    Because of the damage resulting from the 2010 Chile and 2011 Japanese tele-tsunamis, the tsunami risk to the small craft marinas in California has become an important concern. The talk will outline an assessment tool which can be used to assess the tsunami hazard to small craft harbors. The methodology is based on the demand and structural capacity of the floating dock system, composed of floating docks/fingers and moored vessels. The structural demand is determined using a Monte Carlo methodology. Monte Carlo methodology is a probabilistic computational tool used where the governing equations might be well known, but the independent variables of the input (demand) as well as the resisting structural components (capacity) may not be completely known. The Monte Carlo approach uses a distribution of each variable, and then uses that random variable, within the described parameters, to generate a single computation. The process then repeats hundreds or thousands of times. The numerical model "Method of Splitting Tsunamis" (MOST) has been used to determine the inputs for the small craft harbors within California. Hydrodynamic model results of current speed, direction and surface elevation were incorporated via the drag equations to provide the basis of the demand term. To determine the capacities, an inspection program was developed to identify common features of structural components. A total of six harbors have been inspected, ranging from Crescent City in Northern California to Oceanside Harbor in Southern California. Results from the inspection program were used to develop component capacity tables which incorporate the basic specifications of each component (e.g. bolt size and configuration) and a reduction factor (which accounts for the component's reduction in capacity with age) to estimate in situ capacities. Like the demand term, these capacities are added probabilistically into the model. To date the model has been applied to Santa Cruz Harbor as well as Noyo River. Once
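
    A minimal sketch of the demand-versus-capacity Monte Carlo comparison described above; the drag-based demand term, the capacity distributions and every parameter value below are illustrative placeholders rather than inspected component data or MOST model output.

      import numpy as np

      def failure_probability(n_trials=100_000, seed=1):
          rng = np.random.default_rng(seed)
          # Demand: hydrodynamic drag on a moored vessel/dock, F = 0.5*rho*Cd*A*U^2,
          # with the current speed U drawn from an assumed distribution that stands
          # in for hydrodynamic-model output.
          rho = 1025.0                                            # seawater density, kg/m^3
          cd = rng.normal(1.0, 0.1, n_trials)                     # drag coefficient
          area = rng.normal(8.0, 1.0, n_trials)                   # wetted frontal area, m^2
          speed = rng.gamma(shape=3.0, scale=0.5, size=n_trials)  # current speed, m/s
          demand = 0.5 * rho * cd * area * speed ** 2             # N

          # Capacity: strength of the weakest connection, nominal value from an
          # inspection-style table degraded by an age-related reduction factor.
          nominal = rng.lognormal(mean=np.log(60e3), sigma=0.25, size=n_trials)  # N
          age_factor = rng.uniform(0.6, 1.0, n_trials)
          capacity = nominal * age_factor

          return np.mean(demand > capacity)                       # fraction of failed trials

      print(f"estimated failure probability: {failure_probability():.3f}")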

  17. Compilation and review of methodologies for estimating the comparative electric power system costs for renewable energy systems. Working material

    International Nuclear Information System (INIS)

    1993-01-01

    This Working Material provides a review of methodologies for estimating the costs of renewable energy systems and the state-of-the-art knowledge on stochastic features and economic evaluation methodologies of renewable energy systems for electricity generation in a grid-integrated system. It is expected that this material facilitates wider access by interested persons to sources for relevant comparative assessment activities which are progressing in the IAEA. Refs, figs, tabs

  18. Economy-wide estimates of the implications of climate change: A joint analysis for sea level rise and tourism

    Energy Technology Data Exchange (ETDEWEB)

    Bigano, A. [Fondazione Eni Enrico Mattei, Venice (Italy)]|[Ricerche per l' Economia e la Finanza, Milan (Italy); Bosello, F.; Roson, R. [Fondazione Eni Enrico Mattei, Venice (Italy)]|[Ca' Foscari Univ. of Venice (Italy); Tol, R.S.J. [Hamburg Univ. and Centre for Marine and Atmospheric Science, Hamburg (Germany). Research Unit Sustainability and Global Change]|[Vrije Universiteit, Amsterdam (Netherlands). Inst. for Environmental Studies]|[Carnegie Mellon Univ., Pittsburgh, PA (United States)

    2007-07-01

    Climate change impacts on human life have well-defined and different origins. Nevertheless, in the determination of their final effects, especially those involving socio-economic responses, interactions among impacts are likely to play an important role. This paper is one of the first attempts to disentangle and highlight the role of these interactions. It focuses on the economic assessment of two specific climate change impacts: sea-level rise and changes in tourism flows. By using a CGE model the two impact categories are first analyzed separately and then jointly. Comparing the results, it is shown that, even though qualitatively the joint effects follow the outcomes of the disjoint exercises, quantitatively impact interactions do play a significant role. Moreover, it has also been possible to disentangle the relative contribution of each single impact category to the final result. In the case under scrutiny, demand shocks induced by changes in tourism flows outweigh the supply-side shock induced by the loss of coastal land.

  19. A correction in the CDM methodological tool for estimating methane emissions from solid waste disposal sites.

    Science.gov (United States)

    Santos, M M O; van Elk, A G P; Romanel, C

    2015-12-01

    Solid waste disposal sites (SWDS) - especially landfills - are a significant source of methane, a greenhouse gas. Although it has the potential to be captured and used as a fuel, most of the methane formed in SWDS is emitted to the atmosphere, mainly in developing countries. Methane emissions have to be estimated in national inventories. To help this task the Intergovernmental Panel on Climate Change (IPCC) has published three sets of guidelines. In addition, the Kyoto Protocol established the Clean Development Mechanism (CDM) to assist the developed countries to offset their own greenhouse gas emissions by assisting other countries to achieve sustainable development while reducing emissions. Based on methodologies provided by the IPCC regarding SWDS, the CDM Executive Board has issued a tool to be used by project developers for estimating baseline methane emissions in their project activities - on burning biogas from landfills or on preventing biomass from being landfilled and so avoiding methane emissions. Some inconsistencies in the first two IPCC guidelines have already been pointed out in an annex of the latest IPCC edition, although with little detail. The CDM tool uses a model for methane estimation that takes on board the parameters, factors and assumptions provided in the latest IPCC guidelines, while using as its core equation the one from the second IPCC edition, with its shortcoming, as well as allowing a misunderstanding of the time variable. The consequences of wrong ex-ante estimation of baseline emissions in CDM project activities can be economic or environmental. An example of the first type is the 18% overestimation in an actual project on biogas from landfill in Brazil, which harms its developers; of the second type, the 35% overestimation in a project preventing municipal solid waste from being landfilled in China, which harms the environment, not because of the project per se but because of the unduly generated carbon credits. In a simulated landfill - the same
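
    The IPCC-style first-order decay (FOD) model that underlies such estimation tools can be sketched as follows; the decay constant, methane fraction, methane correction factor and waste stream are illustrative placeholders, and the sketch does not reproduce the correction discussed in the paper.

      import math

      def fod_methane(annual_ddoc_t, k=0.05, years=20):
          # First-order decay: carbon deposited in year j decomposes in year t by
          # DDOCm_j * (exp(-k*(t-j)) - exp(-k*(t-j+1))); methane generated is a
          # fixed fraction of the decomposed carbon, converted from t C to t CH4.
          f = 0.5     # fraction of CH4 in landfill gas (illustrative)
          mcf = 1.0   # methane correction factor for a managed site (illustrative)
          ch4_per_year = []
          for t in range(years):
              decomposed = 0.0
              for j, ddoc in enumerate(annual_ddoc_t[: t + 1]):
                  age = t - j
                  decomposed += ddoc * (math.exp(-k * age) - math.exp(-k * (age + 1)))
              ch4_per_year.append(decomposed * mcf * f * 16.0 / 12.0)
          return ch4_per_year

      # Illustrative: ten years of deposits of 1000 t decomposable carbon per year.
      print([round(x, 1) for x in fod_methane([1000.0] * 10)])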

  20. The Application of Best Estimate and Uncertainty Analysis Methodology to Large LOCA Power Pulse in a CANDU 6 Reactor

    International Nuclear Information System (INIS)

    Abdul-Razzak, A.; Zhang, J.; Sills, H.E.; Flatt, L.; Jenkins, D.; Wallace, D.J.; Popov, N.

    2002-01-01

    The paper briefly describes a best estimate plus uncertainty analysis (BE+UA) methodology and presents its prototyping application to the power pulse phase of a limiting large Loss-of-Coolant Accident (LOCA) for a CANDU 6 reactor fuelled with CANFLEX® fuel. The methodology is consistent with and builds on world practice. The analysis is divided into two phases to focus on the dominant parameters for each phase and to allow for the consideration of all identified highly ranked parameters in the statistical analysis and response surface fits for margin parameters. The objective of this analysis is to quantify improvements in predicted safety margins under best estimate conditions. (authors)

  1. Study on methodology to estimate isotope generation and depletion for core design of HTGR

    International Nuclear Information System (INIS)

    Fukaya, Yuji; Ueta, Shohei; Goto, Minoru; Shimakawa, Satoshi

    2013-12-01

    An investigation of methodology to estimate isotope generation and depletion was performed in order to improve the accuracy of HTGR core design. The technical problem of isotope generation and depletion can be divided into three major parts: solving the burn-up equations, generating effective cross sections, and employing nuclide data. Especially for generating effective cross sections, the core burn-up calculation has a technological problem in common with the point burn-up calculation. Thus, the investigation was also performed for the core burn-up calculation, with a view to developing a new code system in the future. As a result, it was found that a cross section with a 108-energy-group structure, extended from the SRAC 107-group structure up to 20 MeV, together with cross-section collapsing using the flux obtained by the deterministic code SRAC, is appropriate for this use. In addition, the needs for nuclear data became clear from an investigation of the preparation conditions for nuclear data for safety analysis and fuel design. (author)

  2. Developing a methodological framework for estimating water productivity indicators in water scarce regions

    Science.gov (United States)

    Mubako, S. T.; Fullerton, T. M.; Walke, A.; Collins, T.; Mubako, G.; Walker, W. S.

    2014-12-01

    Water productivity is an area of growing interest in assessing the impact of human economic activities on water resources, especially in arid regions. Indicators of water productivity can assist water users in evaluating sectoral water use efficiency, identifying sources of pressure on water resources, and in supporting water allocation rationale under scarcity conditions. This case study for the water-scarce Middle Rio Grande River Basin aims to develop an environmental-economic accounting approach for water use in arid river basins through a methodological framework that relates water use to human economic activities impacting regional water resources. Water uses are coupled to economic transactions, and the complex but mutual relations between the various water-using sectors are estimated. A comparison is made between the calculated water productivity indicators and representative cost/price per unit volume of water for the main water use sectors. Although it contributes very little to regional economic output, preliminary results confirm that Irrigation is among the sectors with the largest direct water use intensities. High economic value and low water use intensity economic sectors in the study region include Manufacturing, Mining, and Steam Electric Power. Water accounting challenges revealed by the study include differences in water management regimes between jurisdictions, and little understanding of the impact of major economic activities on the interaction between surface and groundwater systems in this region. A more comprehensive assessment would require the incorporation of environmental and social sustainability indicators into the calculated water productivity indicators.

  3. A methodology for estimating potential doses and risks from recycling U.S. Department of Energy radioactive scrap metals

    International Nuclear Information System (INIS)

    MacKinney, J.A.

    1995-01-01

    The U.S. Environmental Protection Agency (EPA) is considering writing regulations for the controlled use of materials originating from radioactively contaminated zones which may be recyclable. These materials include metals, such as steel (carbon and stainless), nickel, copper, aluminum and lead, from the decommissioning of federal and non-federal facilities. To develop criteria for the release of such materials, a risk analysis of all potential exposure pathways should be conducted. These pathways include direct exposure to the recycled material by the public and workers, both individual and collective, as well as numerous other potential exposure pathways in the life of the material. EPA has developed a risk assessment methodology for estimating doses and risks associated with recycling radioactive scrap metals. This methodology was applied to metal belonging to the U.S. Department of Energy. This paper will discuss the draft EPA risk assessment methodology as a tool for estimating doses and risks from recycling. (author)

  4. Methodology to Estimate the Quantity, Composition, and Management of Construction and Demolition Debris in the United States

    Science.gov (United States)

    This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estima...

  5. Methodological approaches to analysis of agricultural countermeasures on radioactive contaminated areas: Estimation of effectiveness and comparison of different alternatives

    DEFF Research Database (Denmark)

    Yatsalo, B.I.; Hedemann Jensen, P.; Alexakhin, R.M.

    1997-01-01

    Methodological aspects of countermeasure analysis in the long-term period after a nuclear accident are discussed, using agricultural countermeasures for illustrative purposes. The estimates of effectiveness for specific countermeasures, as well as methods for assessing justified action levels... and comparison of different alternatives (countermeasures) based on the use of several criteria, are considered...

  6. ISSUES ON USING PRICE INDICES FOR ESTIMATING GDP AND ITS COMPONENTS AT CONSTANT PRICES ACCORDING TO SNA METHODOLOGY

    Directory of Open Access Journals (Sweden)

    K. Prykhodko

    2014-06-01

    Full Text Available The article examines requirements and methodological approaches to the calculation of price indices (deflators) in the national accounts. It gives estimates of the level and dynamics of price indicators. It proposes improvements to the calculation of price indices (deflators) in the national accounts of Ukraine.

  7. Australian methodology for the estimation of greenhouse gas emissions and sinks: Agriculture: Workbook for livestock: Workbook 6.0

    Energy Technology Data Exchange (ETDEWEB)

    Bureau of Resource Sciences, Canberra, ACT (Australia)

    1994-12-31

    This workbook details a methodology for estimating methane emissions from Australian livestock. The workbook is designed to be consistent with international guidelines and takes into account special Australian conditions. While livestock are regarded as a significant source of anthropogenic methane emissions, it is also acknowledged in this document that they do not provide sinks for methane or any other greenhouse gas. Methane can originate both from fermentation processes in the digestive tracts of all livestock and from manure under certain management conditions. Methane emissions were estimated from beef cattle, dairy cattle, sheep, pigs, poultry, goats, horses, deer, buffalo, camels, emus and ostriches, alpacas and donkeys and mules. Two methodologies were used to estimate emissions. One is the standard Intergovernmental Panel on Climate Change (IPCC) Tier 1 methodology that is needed to provide inter-country comparisons of emissions. The other has been developed by the Inventory Methodology Working Group. It represents the best current Australian method for estimating greenhouse gas emissions from Australian livestock. (author). 6 tabs., 22 refs.
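
    A minimal sketch of the Tier 1 style calculation (annual emissions = livestock population multiplied by a per-head emission factor); the emission factors below are illustrative placeholders, not the workbook's Australian values.

      # Per-head annual CH4 emission factors (enteric fermentation plus manure),
      # kg CH4/head/yr; values are placeholders for illustration only.
      EMISSION_FACTOR_KG_CH4 = {
          "dairy_cattle": 100.0,
          "beef_cattle": 60.0,
          "sheep": 7.0,
          "pigs": 2.5,
      }

      def livestock_methane_gg(populations):
          # Total annual CH4 in gigagrams for a dictionary of head counts.
          kg = sum(populations[animal] * EMISSION_FACTOR_KG_CH4[animal]
                   for animal in populations)
          return kg / 1.0e6

      print(livestock_methane_gg({"dairy_cattle": 1.7e6, "beef_cattle": 24e6, "sheep": 120e6}))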

  8. Methodology for estimating radiation dose rates to freshwater biota exposed to radionuclides in the environment

    International Nuclear Information System (INIS)

    Blaylock, B.G.; Frank, M.L.; O'Neal, B.R.

    1993-08-01

    The purpose of this report is to present a methodology for evaluating the potential for aquatic biota to incur effects from exposure to chronic low-level radiation in the environment. Aquatic organisms inhabiting an environment contaminated with radioactivity receive external radiation from radionuclides in water, sediment, and from other biota such as vegetation. Aquatic organisms receive internal radiation from radionuclides ingested via food and water and, in some cases, from radionuclides absorbed through the skin and respiratory organs. Dose rate equations, which have been developed previously, are presented for estimating the radiation dose rate to representative aquatic organisms from alpha, beta, and gamma irradiation from external and internal sources. Tables containing parameter values for calculating radiation doses from selected alpha, beta, and gamma emitters are presented in the appendix to facilitate dose rate calculations. The risk of detrimental effects to aquatic biota from radiation exposure is evaluated by comparing the calculated radiation dose rate to biota to the U.S. Department of Energy's (DOE's) recommended dose rate limit of 0.4 mGy h⁻¹ (1 rad d⁻¹). A dose rate no greater than 0.4 mGy h⁻¹ to the most sensitive organisms should ensure the protection of populations of aquatic organisms. DOE's recommended dose rate is based on a number of published reviews on the effects of radiation on aquatic organisms that are summarized in the National Council on Radiation Protection and Measurements Report No. 109 (NCRP 1991). DOE recommends that if the results of radiological models or dosimetric measurements indicate that a radiation dose rate of 0.1 mGy h⁻¹ will be exceeded, then a more detailed evaluation of the potential ecological consequences of radiation exposure to endemic populations should be conducted
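
    A minimal sketch of the screening comparison described above, summing internal dose-rate contributions from tissue concentrations and mean decay energies under the simplifying assumption that all emitted energy is absorbed within the organism; the radionuclide concentrations and energies are illustrative placeholders, not the report's tabulated parameters.

      MEV_TO_J = 1.602e-13
      DOE_LIMIT_MGY_PER_H = 0.4

      def internal_dose_rate_mgy_per_h(concentration_bq_per_kg, mean_energy_mev):
          # Assumes all emitted decay energy is absorbed in the tissue (a common
          # simplification for internal dose to small aquatic organisms).
          gy_per_s = concentration_bq_per_kg * mean_energy_mev * MEV_TO_J
          return gy_per_s * 3600.0 * 1.0e3

      # Illustrative tissue concentrations (Bq/kg wet weight) and mean absorbed
      # energies per decay (MeV); placeholder values only.
      body_burden = {"Cs-137": (5000.0, 0.80), "Sr-90+Y-90": (2000.0, 1.13)}
      total = sum(internal_dose_rate_mgy_per_h(c, e) for c, e in body_burden.values())
      print(f"internal dose rate = {total:.2e} mGy/h; "
            f"{'exceeds' if total > DOE_LIMIT_MGY_PER_H else 'below'} the 0.4 mGy/h limit")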

  9. A methodology to identify stranded generation facilities and estimate stranded costs for Louisiana's electric utility industry

    Science.gov (United States)

    Cope, Robert Frank, III

    1998-12-01

    The electric utility industry in the United States is currently experiencing a new and different type of growing pain. It is the pain of having to restructure itself into a competitive business. Many industry experts are trying to explain how the nation as a whole, as well as individual states, will implement restructuring and handle its numerous "transition problems." One significant transition problem for federal and state regulators rests with determining a utility's stranded costs. Stranded generation facilities are assets which would be uneconomic in a competitive environment or costs for assets whose regulated book value is greater than market value. At issue is the methodology which will be used to estimate stranded costs. The two primary methods are known as "Top-Down" and "Bottom-Up." The "Top-Down" approach simply determines the present value of the losses in revenue as the market price for electricity changes over a period of time into the future. The problem with this approach is that it does not take into account technical issues associated with the generation and wheeling of electricity. The "Bottom-Up" approach computes the present value of specific strandable generation facilities and compares the resulting valuations with their historical costs. It is regarded as a detailed and difficult, but more precise, approach to identifying stranded assets and their associated costs. This dissertation develops a "Bottom-Up" quantitative, optimization-based approach to electric power wheeling within the state of Louisiana. It optimally evaluates all production capabilities and coordinates the movement of bulk power through transmission interconnections of competing companies in and around the state. Sensitivity analysis to this approach is performed by varying seasonal consumer demand, electric power imports, and transmission inter-connection cost parameters. Generation facility economic dispatch and transmission interconnection bulk power transfers, specific

  10. A Methodology for Assessing the Impact of Sea Level Rise on Representative Military Installation in the Southwestern United States (RC-1703)

    Science.gov (United States)

    2015-04-01

    [Abstract not available: this record contains only fragments of the report's front matter, including a note that Charles A. Lindbergh's May 1927 New York-to-Paris flight originated at North Island on May 9, 1927, and list-of-figures captions such as "Watersheds at MCBCP (overlay original art; Image: Google, U.S. Geological Survey)", "Groundwater sub-basins of the Santa...", and "Long-term tide gauge data from Amsterdam, Brest, and Swinoujscie (Poland) indicates sea level rise over the past..."]

  11. Estimating the global prevalence of inadequate zinc intake from national food balance sheets: effects of methodological assumptions.

    Directory of Open Access Journals (Sweden)

    K Ryan Wessells

    Full Text Available The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population's theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12-66%, depending on which methodological assumptions were applied. However, the country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57-0.99, P<0.01). A "best-estimate" model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country
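
    The overall logic (comparing absorbable zinc in the national food supply against the physiological requirement under an assumed inter-individual distribution) can be sketched as below. The sketch substitutes a fixed illustrative absorption fraction for the Miller equation used in the study, and every number shown is a placeholder.

      from math import erf, log, sqrt

      def prevalence_inadequate_zinc(zinc_mg_per_day, absorption_fraction,
                                     requirement_mg_absorbed, cv=0.25):
          # EAR cut-point style estimate: the fraction of the population whose
          # absorbed zinc falls below the mean physiological requirement, assuming
          # lognormally distributed intakes with the given coefficient of variation.
          mean_absorbed = zinc_mg_per_day * absorption_fraction
          sigma = sqrt(log(1.0 + cv ** 2))
          mu = log(mean_absorbed) - 0.5 * sigma ** 2
          z = (log(requirement_mg_absorbed) - mu) / sigma
          return 0.5 * (1.0 + erf(z / sqrt(2.0)))     # lognormal CDF at the requirement

      # Illustrative food balance sheet figures: 11 mg Zn/capita/day, 30% predicted
      # absorption (the study used the Miller equation instead of a fixed fraction),
      # and a 2.7 mg/day mean requirement for absorbed zinc.
      print(f"{100 * prevalence_inadequate_zinc(11.0, 0.30, 2.7):.1f}% estimated inadequate")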

  12. Methodology for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.

  13. Methodology to estimating aquatic dispersion of effluents from accidental and routine releases

    International Nuclear Information System (INIS)

    Borges, Diogo da S.; Lava, Deise Diana; Guimarães, Antônio C.F.; Moreira, Maria L.

    2017-01-01

    This paper presents a methodology for analysing the dispersion of radioactive materials in an aquatic environment, specifically for estuaries, based on Regulatory Guide 1.113. The objective is to present an adaptation of the methodology for computational use, made possible by means of numerical approximation techniques. The methodology consists of a numerical approximation of the Navier-Stokes equations applied in a finite medium with known transport mechanisms, such as the Coriolis effect, floor drag, diffusion, salinity, temperature differences and adhesion between water molecules. The basis of the methodology is a diffusive-convective transport equation, which is similar to the one-dimensional Burgers' partial differential equation and, for multidimensional cases, to the Kardar-Parisi-Zhang equation. (author)

  14. Methodology to estimating aquatic dispersion of effluents from accidental and routine releases

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Diogo da S.; Lava, Deise Diana; Guimarães, Antônio C.F.; Moreira, Maria L., E-mail: diogosb@outlook.com, E-mail: deise_dy@hotmail.com, E-mail: tony@ien.gov.br, E-mail: malu@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    This paper presents a methodology for analysing the dispersion of radioactive materials in an aquatic environment, specifically for estuaries, based on Regulatory Guide 1.113. The objective is to present an adaptation of the methodology for computational use, made possible by means of numerical approximation techniques. The methodology consists of a numerical approximation of the Navier-Stokes equations applied in a finite medium with known transport mechanisms, such as the Coriolis effect, floor drag, diffusion, salinity, temperature differences and adhesion between water molecules. The basis of the methodology is a diffusive-convective transport equation, which is similar to the one-dimensional Burgers' partial differential equation and, for multidimensional cases, to the Kardar-Parisi-Zhang equation. (author)

  15. Heat flux estimate of warm water flow in a low-temperature diffuse flow site, southern East Pacific Rise 17°25′S

    Science.gov (United States)

    Goto, Shusaku; Kinoshita, Masataka; Mitsuzawa, Kyohiko

    2003-09-01

    A low-temperature diffuse flow site associated with abundant vent fauna was found by submersible observations on the southern East Pacific Rise at 17°25′S in 1997. This site was characterized by pillow and sheet lavas covered by thin sediment, with collapsed pits up to ~15 m in diameter. There were three warm water vents (temperature: 6.5 to 10.5 °C) within the site, above which the vented fluids rise as plumes. To estimate the heat flux of the warm water vents, a temperature logger array was deployed and the vertical temperature distribution in the water column up to 38 m above the seafloor was monitored. A stationary deep seafloor observatory system was also deployed to monitor hydrothermal activity at this site. The temperature logger array measured temperature anomalies while the plumes from the vents passed through the array. Because the temperature anomalies were measured only in specific current directions, we identified one of the vents as the source. Heat flux from the vent was estimated by applying a model of a plume in crossflow in a density-stratified environment. The average heat flux from September 13 to October 18, 1997 was 39 MW. This heat flux is of the same order as those of high-temperature black smokers, indicating that a large volume flux was discharged from the vent (1.9 m³/s). Previous observations found many similar warm water flow vents along the spreading axis between 17°20′S and 17°30′S. The total heat flux was estimated to be at least a few hundred megawatts. This venting style would contribute to forming effluent hydrothermal plumes extending above the spreading axis.
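
    The advective heat flux of such a vent follows directly from the volume flux and the temperature anomaly, H = rho * cp * Q * dT. The back-of-envelope check below, with generic seawater properties and an assumed ambient bottom-water temperature of about 2 °C, reproduces the order of the reported 39 MW for the reported 1.9 m³/s discharge.

      def vent_heat_flux_mw(volume_flux_m3_s, vent_temp_c, ambient_temp_c,
                            rho=1025.0, cp=3990.0):
          # Advective heat flux H = rho * cp * Q * (T_vent - T_ambient), in MW.
          return rho * cp * volume_flux_m3_s * (vent_temp_c - ambient_temp_c) / 1.0e6

      # 1.9 m^3/s of ~7 degC fluid discharging into ~2 degC bottom water (the
      # ambient temperature is an assumption) gives roughly the reported ~39 MW.
      print(f"{vent_heat_flux_mw(1.9, 7.0, 2.0):.0f} MW")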

  16. Estimating the Greenland ice sheet surface mass balance contribution to future sea level rise using the regional atmospheric climate model MAR

    Directory of Open Access Journals (Sweden)

    X. Fettweis

    2013-03-01

    Full Text Available To estimate the sea level rise (SLR) originating from changes in surface mass balance (SMB) of the Greenland ice sheet (GrIS), we present 21st century climate projections obtained with the regional climate model MAR (Modèle Atmosphérique Régional), forced by output of three CMIP5 (Coupled Model Intercomparison Project Phase 5) general circulation models (GCMs). Our results indicate that in a warmer climate, mass gain from increased winter snowfall over the GrIS does not compensate mass loss through increased meltwater run-off in summer. Despite the large spread in the projected near-surface warming, all the MAR projections show a similar non-linear increase of GrIS surface melt volume because no change is projected in the general atmospheric circulation over Greenland. By coarsely estimating the GrIS SMB changes from GCM output, we show that the uncertainty from the GCM-based forcing represents about half of the projected SMB changes. In 2100, the CMIP5 ensemble mean projects a GrIS SMB decrease equivalent to a mean SLR of +4 ± 2 cm and +9 ± 4 cm for the RCP (Representative Concentration Pathways) 4.5 and RCP 8.5 scenarios respectively. These estimates do not consider the positive melt-elevation feedback, although sensitivity experiments using perturbed ice sheet topographies consistent with the projected SMB changes demonstrate that this is a significant feedback, and highlight the importance of coupling regional climate models to an ice sheet model. Such a coupling will allow the assessment of the future response of both surface processes and ice-dynamic changes to rising temperatures, as well as their mutual feedbacks.
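
    Converting a cumulative SMB anomaly into a sea-level equivalent only requires the ocean surface area (1 Gt of water corresponds to roughly 1/362 mm of global mean sea level). The sketch below uses an illustrative, linearly declining anomaly series rather than actual MAR output.

      OCEAN_AREA_M2 = 3.62e14        # global ocean surface area, m^2
      GT_TO_KG = 1.0e12

      def slr_mm_from_smb_anomalies(annual_smb_anomaly_gt):
          # Sea-level equivalent of a cumulative surface mass balance anomaly:
          # mass loss (negative SMB anomaly) spread uniformly over the ocean.
          cumulative_loss_gt = -sum(annual_smb_anomaly_gt)
          metres = cumulative_loss_gt * GT_TO_KG / 1000.0 / OCEAN_AREA_M2
          return metres * 1000.0

      # Illustrative: an SMB anomaly declining linearly from 0 to -350 Gt/yr
      # over 2006-2100 (95 years), purely as a placeholder series.
      anomalies = [-350.0 * year / 94 for year in range(95)]
      print(f"about {slr_mm_from_smb_anomalies(anomalies) / 10:.1f} cm of sea level rise by 2100")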

  17. A methodology for the estimation of release of fission products during LOCA with loss of ECCS

    International Nuclear Information System (INIS)

    Lele, H.G.; Majumdar, P.; Mukhopadhyay, D.; Gupta, S.K.; Venkat Raj, V.

    2002-01-01

    A Loss of Coolant Accident (LOCA) in a nuclear reactor along with the failure of the Emergency Core Cooling System can cause sustained voiding of the core. In such a situation the core experiences very low flow, which leads to poor heat removal from the reactor core. The heat to be removed from the core includes stored heat, heat generated due to metal-water reaction at high temperatures, decay heat, etc. The poor heat removal leads to heating of the fuel pins to high temperatures. The heating of fuel pins is further enhanced due to metal-water reaction at high temperatures. These high temperatures of the fuel pins can lead to fission product release, which is transported into the Primary Heat Transport (PHT) system and can enter the containment through the break. The analysis is involved due to the complexity of the system and the phenomena to be simulated. In this paper a multistage analysis methodology is presented that involves the development and application of a number of computer programs to model the various phenomena involved. The computer code PHTACT computes the activity release from the fuel as a function of fuel temperatures and cladding oxidation, its distribution into the PHT system and release into the containment. Computation of thermal hydraulic parameters during LOCA is done using the thermal hydraulic analysis code RELAP5. The detailed simulation of fuel pin temperatures is done using the computer code HT/MOD4. The convective boundary conditions required for the code are obtained from RELAP5. Creep deformation is considered in the computation of dimensional changes of the coolant channel and estimation of flow blockage due to clad ballooning. The progression of the various reaction layers due to high-temperature reactions between fuel and clad, and between clad and steam, is also computed, which affects the structural strength of the clad. Different approaches are possible and analysis can be carried out in different phases depending upon the complexities to be

  18. An Efficient Power Estimation Methodology for Complex RISC Processor-based Platforms

    OpenAIRE

    Rethinagiri , Santhosh Kumar; Ben Atitallah , Rabie; Dekeyser , Jean-Luc; Niar , Smail; Senn , Eric

    2012-01-01

    International audience; In this contribution, we propose an efficient power estimation methodology for complex RISC processor-based platforms. In this methodology, the Functional Level Power Analysis (FLPA) is used to set up generic power models for the different parts of the system. Then, a simulation framework based on a virtual platform is developed to evaluate accurately the activities used in the related power models. The combination of the two parts above leads to a heterogeneou...

  19. Using Rising Limb Analysis to Estimate Uptake of Reactive Solutes in Advective and Transient Storage Sub-compartments of Stream Ecosystems

    Science.gov (United States)

    Thomas, S. A.; Valett, H.; Webster, J. R.; Mulholland, P. J.; Dahm, C. N.

    2001-12-01

    Identifying the locations and controls governing solute uptake is a recent area of focus in studies of stream biogeochemistry. We introduce a technique, rising limb analysis (RLA), to estimate areal nitrate uptake in the advective and transient storage (TS) zones of streams. RLA is an inverse approach that combines nutrient spiraling and transient storage modeling to calculate total uptake of reactive solutes and the fraction of uptake occurring within the advective sub-compartment of streams. The contribution of the transient storage zones to solute loss is determined by difference. Twelve-hour coinjections of conservative (Cl-) and reactive (15NO3) tracers were conducted seasonally in several headwater streams among which AS/A ranged from 0.01 - 2.0. TS characteristics were determined using an advection-dispersion model modified to include hydrologic exchange with a transient storage compartment. Whole-system uptake was determined by fitting the longitudinal pattern of NO3 to a first-order exponential decay model. Uptake in the advective sub-compartment was determined by collecting a temporal sequence of samples from a single location beginning with the arrival of the solute front and concluding with the onset of plateau conditions (i.e. the rising limb). Across the rising limb, 15NO3:Cl was regressed against the percentage of water that had resided in the transient storage zone (calculated from the TS modeling). The y-intercept thus provides an estimate of the plateau 15NO3:Cl ratio in the absence of NO3 uptake within the transient storage zone. Algebraic expressions were used to calculate the percentage of NO3 uptake occurring in the advective and transient storage sub-compartments. Application of RLA successfully estimated uptake coefficients for NO3 in the subsurface when the physical dimensions of that habitat were substantial (AS/A > 0.2) and when plateau conditions at the sampling location consisted of waters in which at least 25% had resided in the
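
    The two regression steps described above can be sketched as follows; the distances, plateau concentrations and tracer ratios are illustrative, and the final partitioning between the advective and transient storage sub-compartments (the paper's algebraic expressions) is only indicated in a closing comment.

      import numpy as np

      # Step 1: whole-stream uptake from the longitudinal plateau profile,
      # C(x) = C0 * exp(-kw * x), so kw comes from a log-linear fit.
      distance_m = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # illustrative stations
      plateau_no3 = np.array([100.0, 88.0, 78.0, 69.0, 61.0])   # background-corrected
      slope, _ = np.polyfit(distance_m, np.log(plateau_no3), 1)
      kw = -slope
      print(f"whole-stream uptake coefficient kw = {kw:.4f} per m")

      # Step 2: rising limb analysis at one station: regress the 15NO3:Cl ratio
      # against the fraction of water that has resided in transient storage (from
      # the transient storage model); the y-intercept estimates the plateau ratio
      # expected if no uptake occurred in the storage zone.
      ts_fraction = np.array([0.05, 0.10, 0.18, 0.25, 0.32])    # illustrative
      ratio_15n_cl = np.array([0.92, 0.89, 0.85, 0.82, 0.79])   # illustrative
      reg_slope, intercept = np.polyfit(ts_fraction, ratio_15n_cl, 1)
      print(f"intercept (plateau ratio without transient-storage uptake) = {intercept:.3f}")
      # The study's algebraic expressions then use this intercept to split total
      # uptake between the advective and transient storage sub-compartments.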

  20. Accelerated lifetime testing methodology for lifetime estimation of Lithium-ion batteries used in augmented wind power plants

    DEFF Research Database (Denmark)

    Stroe, Daniel Ioan; Swierczynski, Maciej Jozef; Stan, Ana-Irina

    2013-01-01

    The development of lifetime estimation models for Lithium-ion battery cells, which are working under highly variable mission profiles characteristic for wind power plant applications, requires a lot of expenditures and time resources. Therefore, batteries have to be tested under accelerated...... lifetime ageing conditions. This paper presents a three-stage methodology used for accelerated lifetime testing of Lithium-ion batteries. The results obtained at the end of the accelerated ageing process can be used for the parametrization of a performance-degradation lifetime model. In the proposed...... methodology both calendar and cycling lifetime tests are considered since both components are influencing the lifetime of Lithium-ion batteries. The methodology proposes also a lifetime model verification stage, where Lithium-ion battery cells are tested at normal operating conditions using an application...

  1. Accelerated Lifetime Testing Methodology for Lifetime Estimation of Lithium-ion Batteries used in Augmented Wind Power Plants

    DEFF Research Database (Denmark)

    Stroe, Daniel Ioan; Swierczynski, Maciej Jozef; Stan, Ana-Irina

    2014-01-01

    The development of lifetime estimation models for Lithium-ion battery cells, which are working under highly variable mission profiles characteristic for wind power plant applications, requires a lot of expenditures and time resources. Therefore, batteries have to be tested under accelerated...... lifetime ageing conditions. This paper presents a three-stage methodology used for accelerated lifetime testing of Lithium ion batteries. The results obtained at the end of the accelerated ageing process were used for the parametrization of a performance-degradation lifetime model, which is able to predict...... both the capacity fade and the power capability decrease of the selected Lithium-ion battery cells. In the proposed methodology both calendar and cycling lifetime tests were considered since both components are influencing the lifetime of Lithium-ion batteries. Furthermore, the proposed methodology...

  2. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  3. Pulse superimposition calculational methodology for estimating the subcriticality level of nuclear fuel assemblies

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Rabiti, C.; Aliberti, G.; Kondev, F.; Smith, D.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Serafimovich, I.

    2009-01-01

    One of the most reliable experimental methods for measuring the subcriticality level of a nuclear fuel assembly is the Sjoestrand method applied to the reaction rate generated from a pulsed neutron source. This study developed a new analytical methodology simulating the Sjoestrand method, which allows comparing the experimental and analytical reaction rates and the obtained subcriticality levels. In this methodology, the reaction rate is calculated due to a single neutron pulse using the MCNP/MCNPX computer code or any other neutron transport code that explicitly simulates the delayed fission neutrons. The calculation simulates a single neutron pulse over a long time period until the delayed neutron contribution to the reaction rate has vanished. The obtained reaction rate is then superimposed on itself, with respect to time, to simulate the repeated pulse operation until the asymptotic level of the reaction rate, set by the delayed neutrons, is achieved. The superimposition of the pulse onto itself was calculated by a simple C computer program. A parallel version of the C program is used due to the large amount of data being processed, e.g. by the Message Passing Interface (MPI). The analytical results of this new calculation methodology have shown an excellent agreement with the experimental data available from the YALINA-Booster facility of Belarus. This methodology can be used to calculate the Bell and Glasstone spatial correction factor.
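
    A minimal numerical sketch of the superimposition step and of a Sjoestrand-type area-ratio estimate, using a synthetic single-pulse response (a fast prompt exponential plus a slowly decaying delayed tail) in place of an MCNP/MCNPX time-binned tally; the pulse frequency, decay constants and amplitudes are illustrative placeholders.

      import numpy as np

      def superimpose(single_pulse, period_bins, n_pulses=200):
          # Shift the single-pulse response by multiples of the pulse period and sum,
          # reproducing repeated-pulse operation; return one period taken after the
          # delayed-neutron level has (approximately) saturated.
          out = np.zeros(len(single_pulse) + n_pulses * period_bins)
          for i in range(n_pulses):
              start = i * period_bins
              out[start:start + len(single_pulse)] += single_pulse
          start = (n_pulses - 2) * period_bins
          return out[start:start + period_bins]

      dt = 1.0e-4                                  # s per time bin
      t = np.arange(0.0, 10.0, dt)                 # follow one pulse for 10 s
      prompt = 1.0e6 * np.exp(-t / 2.0e-3)         # synthetic prompt decay
      delayed = 1.0e2 * np.exp(-t / 8.0)           # synthetic delayed tail
      pulse_response = prompt + delayed            # placeholder for a transport tally

      period_bins = int(0.05 / dt)                 # 20 Hz pulsed source
      one_period = superimpose(pulse_response, period_bins)

      delayed_level = one_period.min()             # asymptotic delayed-neutron plateau
      prompt_area = np.sum(one_period - delayed_level) * dt
      delayed_area = delayed_level * period_bins * dt
      print(f"reactivity ~ {-prompt_area / delayed_area:.2f} $ (area-ratio estimate)")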

  4. Pulse superimposition calculational methodology for estimating the subcriticality level of nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States)], E-mail: atalamo@anl.gov; Gohar, Y. [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Rabiti, C. [Idaho National Laboratory, P.O. Box 2528, Idaho Falls, ID 83403 (United States); Aliberti, G.; Kondev, F.; Smith, D.; Zhong, Z. [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Serafimovich, I. [Joint Institute for Power and Nuclear Research-Sosny, National Academy of Sciences (Belarus)

    2009-07-21

    One of the most reliable experimental methods for measuring the subcriticality level of a nuclear fuel assembly is the Sjoestrand method applied to the reaction rate generated from a pulsed neutron source. This study developed a new analytical methodology simulating the Sjoestrand method, which allows comparing the experimental and analytical reaction rates and the obtained subcriticality levels. In this methodology, the reaction rate is calculated due to a single neutron pulse using the MCNP/MCNPX computer code or any other neutron transport code that explicitly simulates the delayed fission neutrons. The calculation simulates a single neutron pulse over a long time period until the delayed neutron contribution to the reaction rate has vanished. The obtained reaction rate is then superimposed on itself, with respect to time, to simulate the repeated pulse operation until the asymptotic level of the reaction rate, set by the delayed neutrons, is achieved. The superimposition of the pulse onto itself was calculated by a simple C computer program. A parallel version of the C program is used due to the large amount of data being processed, e.g. by the Message Passing Interface (MPI). The analytical results of this new calculation methodology have shown an excellent agreement with the experimental data available from the YALINA-Booster facility of Belarus. This methodology can be used to calculate the Bell and Glasstone spatial correction factor.

  5. Proposed methodology for estimating the impact of highway improvements on urban air pollution.

    Science.gov (United States)

    1971-01-01

    The aim of this methodology is to indicate the expected change in ambient air quality in the vicinity of a highway improvement and in the total background level of urban air pollution resulting from the highway improvement. Both the jurisdiction in w...

  6. Introducing a methodology for estimating duration of surgery in health services research.

    Science.gov (United States)

    Redelmeier, Donald A; Thiruchelvam, Deva; Daneman, Nick

    2008-09-01

    The duration of surgery is an indicator of the quality, risks, and efficiency of surgical procedures. We introduce a new methodology for assessing the duration of surgery based on anesthesiology billing records, along with reviewing its fundamental logic and limitations. The validity of the methodology was assessed through a population-based cohort of patients (n=480,986) undergoing elective operations in 246 Ontario hospitals with 1,084 anesthesiologists between April 1, 1992 and March 31, 2002 (10 years). The weaknesses of the methodology relate to missing data, self-serving exaggerations by providers, imprecisions from clinical diversity, upper limits due to accounting regulations, fluctuations from updates over the years, national differences in reimbursement schedules, and the general failings of claims-based analyses. The strengths of the methodology are in providing data that match clinical experiences, correspond to chart review, are consistent over time, can detect differences where differences would be anticipated, and might have implications for examining patient outcomes after long surgical times. We suggest that an understanding and application of large studies of surgical duration may help scientists explore selected questions concerning postoperative complications.

  7. A Decision Tool to Evaluate Budgeting Methodologies for Estimating Facility Recapitalization Requirements

    National Research Council Canada - National Science Library

    Hickman, Krista M

    2008-01-01

    .... Specifically, the thesis sought to answer an overarching research question addressing the importance of recapitalization and the best method to estimate the facility recapitalization budget using...

  8. Scientifically-methodological aspects of agroecological estimation of farmlands in the conditions of radioactive pollution

    International Nuclear Information System (INIS)

    Tsybul'ko, N.N.; Misyuchik, A.A.

    2009-01-01

    Methodological aspects of adaptive land tenure under conditions of radioactive contamination are substantiated, based on an agroecological assessment of farmlands with respect to the radiation factor and an assessment of the influence of soil-landscape conditions on radionuclide migration. (authors)

  9. A methodology for the optimization of the estimation of tritium in urine by liquid scintillation counting

    International Nuclear Information System (INIS)

    Joseph, S.; Kramer, G.H.

    1982-10-01

    A method has been designed to optimize liquid scintillation (LS) urinalysis with respect to sensitivity and cost. Three related factors, quench, sample composition and counting efficiency, were measured simultaneously and the results plotted in three dimensions to determine the optimum conditions for urinalysis. Picric acid was used to simulate quenching. Subsequent urinalysis experiments showed that quenching by picric acid was analogous to urine quenching. The optimization methodology was applied to ten commercial LS cocktails and a wide divergence in results was obtained. This method can also be used to optimize minimum detectable activities (MDA) but the results show that there is no fixed sample composition that can be used for all the various types of urine samples; however, it is possible to achieve general improvements of at least a factor of 2 in the MDA for Scintiverse (the only one tested for this particular application of the methodology)

  10. A Methodology for the Estimation of the Wind Generator Economic Efficiency

    Science.gov (United States)

    Zaleskis, G.

    2017-12-01

    Integration of renewable energy sources and improvement of the technological base may not only reduce the consumption of fossil fuel and the environmental load, but also ensure the power supply in regions with difficult fuel delivery or frequent power failures. The main goal of the research is to develop a methodology for evaluating the economic efficiency of wind turbines. The research has demonstrated that electricity produced from renewable sources may be much more expensive than electricity purchased from the conventional grid.
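
    One common way to make the comparison described above is a levelized cost of energy (LCOE) calculation, discounted lifetime costs divided by discounted lifetime output; the small-turbine parameters below are illustrative placeholders rather than the paper's methodology or data.

      def lcoe_eur_per_mwh(capex, annual_opex, annual_energy_mwh, lifetime_years, discount_rate):
          # Levelized cost of energy: discounted lifetime costs / discounted lifetime output.
          costs = capex + sum(annual_opex / (1 + discount_rate) ** y
                              for y in range(1, lifetime_years + 1))
          energy = sum(annual_energy_mwh / (1 + discount_rate) ** y
                       for y in range(1, lifetime_years + 1))
          return costs / energy

      # Illustrative 10 kW turbine: 30,000 EUR installed, 600 EUR/yr O&M,
      # 18 MWh/yr output, 20-year life, 6% discount rate.
      print(f"{lcoe_eur_per_mwh(30_000, 600, 18.0, 20, 0.06):.0f} EUR/MWh")

    A result of this size, compared against a typical retail tariff, is the kind of figure behind the conclusion that small-scale renewable generation can be more expensive than grid electricity.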

  11. A Methodology of Estimation on Air Pollution and Its Health Effects in Large Japanese Cities

    OpenAIRE

    Hirota, Keiko; Shibuya, Satoshi; Sakamoto, Shogo; Kashima, Shigeru

    2012-01-01

    The correlation between air pollution and health effects in large Japanese cities presents a great challenge owing to the limited availability of data on exposure to pollution and health effects, and the uncertainty of mixed causes. A methodology for establishing quantitative relationships (between emission volume and air quality, and between air quality and health effects) is analysed with a statistical method in this article, examining the correlation with air pollution reduction policy in Japan from 1974 to 2007. ...

  12. Strategy to evaluate persistent contaminant hazards resulting from sea-level rise and storm-derived disturbances—Study design and methodology for station prioritization

    Science.gov (United States)

    Reilly, Timothy J.; Jones, Daniel K.; Focazio, Michael J.; Aquino, Kimberly C.; Carbo, Chelsea L.; Kaufhold, Erika E.; Zinecker, Elizabeth K.; Benzel, William M.; Fisher, Shawn C.; Griffin, Dale W.; Iwanowicz, Luke R.; Loftin, Keith A.; Schill, William B.

    2015-10-26

    Coastal communities are uniquely vulnerable to sea-level rise (SLR) and severe storms such as hurricanes. These events enhance the dispersion and concentration of natural and anthropogenic chemicals and pathogenic microorganisms that could adversely affect the health and resilience of coastal communities and ecosystems in coming years. The U.S. Geological Survey has developed a strategy to define baseline and post-event sediment-bound environmental health (EH) stressors (hereafter referred to as the Sediment-Bound Contaminant Resiliency and Response [SCoRR] strategy). A tiered, multimetric approach will be used to (1) identify and map contaminant sources and potential exposure pathways for human and ecological receptors, (2) define the baseline mixtures of EH stressors present in sediments and correlations of relevance, (3) document post-event changes in EH stressors present in sediments, and (4) establish and apply metrics to quantify changes in coastal resilience associated with sediment-bound contaminants. Integration of this information provides a means to improve assessment of the baseline status of a complex system and the significance of changes in contaminant hazards due to storm-induced (episodic) and SLR (incremental) disturbances. This report describes the purpose and design of the SCoRR strategy and the methods used to construct a decision support tool to identify candidate sampling stations vulnerable to contaminants that may be mobilized by coastal storms.

  13. Rising equity

    International Nuclear Information System (INIS)

    Burr, M.T.

    1992-01-01

    This article reports on the results of a financial rankings survey of the independent energy industry indicating that lenders and investors provided more than five billion dollars in capital for new, private power projects during the first six months of 1992. The topics of the article include rising equity requirements, corporate finance, mergers and acquisitions, project finance investors, revenue bonds, project finance lenders for new projects, project finance lenders for restructurings, and project finance advisors

  14. Estimates of emergency operating capacity in US manufacturing and nonmanufacturing industries - Volume 1: Concepts and Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Belzer, D.B. (Pacific Northwest Lab., Richland, WA (USA)); Serot, D.E. (D/E/S Research, Richland, WA (USA)); Kellogg, M.A. (ERCE, Inc., Portland, OR (USA))

    1991-03-01

    Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner to allow evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key nonmanufacturing sectors. This volume lays out the general concepts and methods used to develop the emergency operating estimates. The historical analysis of capacity extends from 1974 through 1986. Some nonmanufacturing industries are included. In addition to mining and utilities, key industries in transportation, communication, and services were analyzed. Physical capacity and efficiency of production were measured. 3 refs., 2 figs., 12 tabs. (JF)

  15. Procedures for estimating the radiation dose in the vicinity of uranium mines and mills by direct calculation methodology

    International Nuclear Information System (INIS)

    Coelho, C.P.

    1983-01-01

    A methodology for estimating the radiation doses to members of the general public in the vicinity of uranium mines and mills is presented. The data collected in the surveys performed to characterize the neighborhood of the site, and used in this work to estimate the radiation dose, are required by the Regulatory Body for the purpose of licensing. Initially, a description is given of the main processing steps used to obtain the uranium concentrate, and the critical radionuclides of the installation are identified. Following this, some studies required to characterize the facility neighborhood are presented, especially those related to geography, demography, meteorology, hydrology and environmental protection. Also, the basic programs for monitoring the facility neighborhood in the pre-operational and operational phases are included. A procedure is then proposed to estimate inhalation, ingestion and external doses. As an example, the proposed procedure is applied to a hypothetical site. Finally, some aspects related to the applicability of this work are discussed. (Author) [pt

  16. A Methodology for Assessing the Impact of Sea Level Rise on Representative Military Installations in the Southwestern United States (RC-1703)

    Science.gov (United States)

    2014-03-03

    [Abstract not available: this record contains only fragments of the report's acknowledgments and references, including contributor names (Mr. Chris Stathos and Mr. John Crow), a note that nearshore wave heights along the coast were estimated by combining wave transformation results (Longuet-Higgins, 1957; O'Reilly and Guza, 1991; O'Reilly et al., 1993), and reference entries such as Longuet-Higgins, M.S., 1957, "On the transformation of a continuous spectrum by..."]

  17. Methodology for cost estimate in projects for nuclear power plants decommissioning

    International Nuclear Information System (INIS)

    Salij, L.M.

    2008-01-01

    Conceptual approaches to cost estimation for nuclear power plant unit decommissioning projects are defined. International experience and the national legislative and regulatory basis are analyzed, and a possible classification of decommissioning project costs is given. The role of project costs is shown to be the most important criterion for the main decisions in decommissioning projects for nuclear power plant units. A technical and economic assessment of the contributions to the common branch fund financing decommissioning projects is substantiated.

  18. A GIS-based methodology for the estimation of potential volcanic damage and its application to Tenerife Island, Spain

    Science.gov (United States)

    Scaini, C.; Felpeto, A.; Martí, J.; Carniel, R.

    2014-05-01

    This paper presents a GIS-based methodology to estimate damage produced by volcanic eruptions. The methodology consists of four parts: definition and simulation of eruptive scenarios, exposure analysis, vulnerability assessment and estimation of expected damage. Multi-hazard eruptive scenarios are defined for the Teide-Pico Viejo active volcanic complex and simulated with the VORIS tool. The exposure analysis identifies the elements exposed to the hazard at stake and focuses on the relevant assets for the study area. The vulnerability analysis is based on previous studies of the built environment and is complemented with an analysis of transportation and urban infrastructures. Damage assessment is performed by associating a qualitative damage rating with each combination of hazard and vulnerability; this operation consists of a GIS-based overlay, performed for each hazardous phenomenon considered and for each exposed element, as sketched below. The methodology is then automated into a GIS-based tool using an ArcGIS® program. Given the eruptive scenarios and the characteristics of the exposed elements, the tool produces expected damage maps. The tool is applied to the Icod Valley (north of Tenerife Island), which is likely to be affected by volcanic phenomena in case of an eruption from either the Teide-Pico Viejo volcanic complex or the North-West basaltic rift. The results are thematic maps of vulnerability and damage that can be displayed at different levels of detail, depending on user preferences. The aim of the tool is to facilitate territorial planning and risk management in active volcanic areas.
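
    A minimal sketch of the damage-rating overlay described above, assuming hypothetical hazard and vulnerability class rasters and an illustrative 3 x 3 damage matrix (none of these values come from the paper):

    import numpy as np

    # Hypothetical rasters: hazard intensity class (0-2) and vulnerability class (0-2)
    hazard = np.array([[0, 1, 2], [1, 2, 2], [0, 0, 1]])
    vulnerability = np.array([[1, 1, 2], [0, 2, 2], [0, 1, 1]])

    # Illustrative damage matrix: rows = hazard class, columns = vulnerability class,
    # values = qualitative damage rating (0 = none, 1 = light, 2 = moderate, 3 = heavy)
    damage_matrix = np.array([
        [0, 0, 1],
        [1, 2, 2],
        [2, 3, 3],
    ])

    # GIS-style overlay: look up a damage rating for every raster cell
    damage = damage_matrix[hazard, vulnerability]
    print(damage)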

  19. Evaluating methodological assumptions of a catch-curve survival estimation of unmarked precocial shorebird chicks

    Science.gov (United States)

    McGowan, Conor P.; Gardner, Beth

    2013-01-01

    Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method using a modified catch-curve to estimate precocial chick daily survival from age-based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios in which the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with the true survival rates of the simulation. Violating the accurate-aging and independence assumptions did not result in biased daily survival estimates, whereas unequal detection of younger or older birds and violation of the birth-death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth-death equilibrium are key to the successful use of this method for precocial shorebirds.
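
    A minimal illustration of the catch-curve idea, under the assumptions named above (equal detectability across ages and birth-death equilibrium): expected counts decline geometrically with age, so the slope of ln(count) versus age estimates the log of daily survival. The counts below are simulated, hypothetical data, not the Missouri River observations.

    import numpy as np

    # Simulate hypothetical age-based counts of chicks (age in days).
    ages = np.arange(0, 15)
    rng = np.random.default_rng(1)
    true_phi = 0.92                               # true daily survival
    counts = rng.poisson(200 * true_phi ** ages)  # E[N_a] proportional to phi**a

    # Catch-curve estimate: slope of ln(count) vs. age gives ln(daily survival).
    mask = counts > 0                             # drop zero counts before taking logs
    slope, intercept = np.polyfit(ages[mask], np.log(counts[mask]), 1)
    phi_hat = np.exp(slope)
    print(f"estimated daily survival: {phi_hat:.3f} (true value {true_phi})")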

  20. New methodology for estimating biofuel consumption for cooking: Atmospheric emissions of black carbon and sulfur dioxide from India

    Science.gov (United States)

    Habib, Gazala; Venkataraman, Chandra; Shrivastava, Manish; Banerjee, Rangan; Stehr, J. W.; Dickerson, Russell R.

    2004-09-01

    The dominance of biofuel combustion emissions in the Indian region, and the inherently large uncertainty in biofuel use estimates based on cooking energy surveys, prompted the current work, which develops a new methodology for estimating biofuel consumption for cooking. This is based on food consumption statistics, and the specific energy for food cooking. Estimated biofuel consumption in India was 379 (247-584) Tg yr-1. New information on the user population of different biofuels was compiled at a state level, to derive the biofuel mix, which varied regionally and was 74:16:10%, respectively, of fuelwood, dung cake and crop waste, at a national level. Importantly, the uncertainty in biofuel use from quantitative error assessment using the new methodology is around 50%, giving a narrower bound than in previous works. From this new activity data and currently used black carbon emission factors, the black carbon (BC) emissions from biofuel combustion were estimated as 220 (65-760) Gg yr-1. The largest BC emissions were from fuelwood (75%), with lower contributions from dung cake (16%) and crop waste (9%). The uncertainty of 245% in the BC emissions estimate is now governed by the large spread in BC emission factors from biofuel combustion (122%), implying the need for reducing this uncertainty through measurements. Emission factors of SO2 from combustion of biofuels widely used in India were measured, and ranged 0.03-0.08 g kg-1 from combustion of two wood species, 0.05-0.20 g kg-1 from 10 crop waste types, and 0.88 g kg-1 from dung cake, significantly lower than currently used emission factors for wood and crop waste. Estimated SO2 emissions from biofuels of 75 (36-160) Gg yr-1 were about a factor of 3 lower than that in recent studies, with a large contribution from dung cake (73%), followed by fuelwood (21%) and crop waste (6%).

  1. Estimating significances of differences between slopes: A new methodology and software

    Directory of Open Access Journals (Sweden)

    Vasco M. N. C. S. Vieira

    2013-09-01

    Full Text Available Determining the significance of slope differences is a common requirement in studies of self-thinning, ontogeny and sexual dimorphism, among others. This has long been carried out by testing for the overlap of the bootstrapped 95% confidence intervals of the slopes. However, numerical random re-sampling with repetition favours the occurrence of re-combinations yielding largely diverging slopes, widening the confidence intervals and thus increasing the chances of overlooking significant differences. To overcome this problem, a permutation test simulating the null hypothesis of no difference between slopes is proposed. This new methodology, when applied both to artificial and factual data, showed an enhanced ability to differentiate slopes.
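
    A hedged sketch of a slope permutation test of this kind: group labels are shuffled over the pooled observations to simulate the null hypothesis of equal slopes (the paper's exact permutation scheme may differ; all data below are synthetic).

    import numpy as np

    def slope(x, y):
        """Ordinary least-squares slope of y on x."""
        return np.polyfit(x, y, 1)[0]

    def slope_permutation_test(x1, y1, x2, y2, n_perm=5000, seed=0):
        """Two-sided permutation p-value for H0: equal slopes in the two groups."""
        rng = np.random.default_rng(seed)
        observed = abs(slope(x1, y1) - slope(x2, y2))
        x, y, n1 = np.concatenate([x1, x2]), np.concatenate([y1, y2]), len(x1)
        count = 0
        for _ in range(n_perm):
            idx = rng.permutation(len(x))          # re-assign group membership at random
            g1, g2 = idx[:n1], idx[n1:]
            if abs(slope(x[g1], y[g1]) - slope(x[g2], y[g2])) >= observed:
                count += 1
        return count / n_perm

    # Synthetic allometric data for two groups with genuinely different slopes
    rng = np.random.default_rng(42)
    x1 = rng.uniform(0, 10, 40); y1 = 1.2 * x1 + rng.normal(0, 1, 40)
    x2 = rng.uniform(0, 10, 40); y2 = 0.8 * x2 + rng.normal(0, 1, 40)
    print("p-value:", slope_permutation_test(x1, y1, x2, y2))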

  2. Development and validation of a CFD based methodology to estimate the pressure loss of flow through perforated plates

    International Nuclear Information System (INIS)

    Barros Filho, Jose A.; Navarro, Moyses A.; Santos, Andre A.C. dos; Jordao, E.

    2011-01-01

    In spite of the recent rapid development of Computational Fluid Dynamics (CFD), there are still open questions about how to assess its accuracy. This work presents the validation of a CFD methodology devised to estimate the pressure drop of water flow through perforated plates similar to the ones used in some reactor core components. This was accomplished by comparing the results of CFD simulations against experimental data for 5 perforated plates with different geometric characteristics. The proposed methodology correlates the experimental data within a range of ± 7.5%. The validation procedure recommended by the ASME Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer (V&V 20) is also evaluated. The conclusion is that it is not adequate for this specific use. (author)

  3. Trip Energy Estimation Methodology and Model Based on Real-World Driving Data for Green Routing Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Van Til, Harrison J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-09

    A data-informed model to predict energy use for a proposed vehicle trip has been developed in this paper. The methodology leverages nearly 1 million miles of real-world driving data to generate the estimation model. Driving is categorized at the sub-trip level by average speed, road gradient, and road network geometry, then aggregated by category. An average energy consumption rate is determined for each category, creating an energy rates look-up table. Proposed vehicle trips are then categorized in the same manner, and estimated energy rates are appended from the look-up table. The methodology is robust and applicable to almost any type of driving data. The model has been trained on vehicle global positioning system data from the Transportation Secure Data Center at the National Renewable Energy Laboratory and validated against on-road fuel consumption data from testing in Phoenix, Arizona. The estimation model has demonstrated an error range of 8.6% to 13.8%. The model results can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations to reduce energy consumption. This work provides a highly extensible framework that allows the model to be tuned to a specific driver or vehicle type.
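
    A toy illustration of the look-up-table approach described above: driving segments are binned by average speed and road grade, an average energy rate is computed per bin, and a proposed trip is categorized the same way so the rates can be appended. All numbers are invented for illustration; they are not NREL data, and the real model also uses road network geometry.

    import pandas as pd

    # Hypothetical real-world driving records: one row per road segment traversal.
    drive = pd.DataFrame({
        "speed_mph":  [22, 48, 63, 30, 55, 70, 25, 40],
        "grade_pct":  [0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 3.0, 0.0],
        "miles":      [0.8, 1.5, 2.2, 1.0, 1.8, 2.0, 0.9, 1.2],
        "energy_kwh": [0.12, 0.30, 0.65, 0.15, 0.45, 0.50, 0.20, 0.25],
    })

    def categorize(df):
        """Bin segments by average speed and road grade (illustrative bin edges)."""
        out = df.copy()
        out["speed_bin"] = pd.cut(out.speed_mph, bins=[0, 30, 50, 80]).astype(str)
        out["grade_bin"] = pd.cut(out.grade_pct, bins=[-10, -1, 1, 10]).astype(str)
        return out

    # Build the energy-rates look-up table: average kWh per mile in each category.
    binned = categorize(drive)
    totals = binned.groupby(["speed_bin", "grade_bin"])[["energy_kwh", "miles"]].sum()
    rates = (totals.energy_kwh / totals.miles).rename("kwh_per_mile").reset_index()

    # Categorize a proposed trip the same way and append the estimated rates.
    trip = pd.DataFrame({"speed_mph": [28, 60], "grade_pct": [0.2, 1.8], "miles": [3.0, 5.0]})
    trip = categorize(trip).merge(rates, on=["speed_bin", "grade_bin"], how="left")
    print("estimated trip energy:", round((trip.kwh_per_mile * trip.miles).sum(), 2), "kWh")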

  4. Estimation of dose in nuclear medicine patients: implementation of a calculation program and methodology

    International Nuclear Information System (INIS)

    Prieto, C.; Espana, M.L.; Tomasi, L.; Lopez Franco, P.

    1998-01-01

    Our hospital is developing a nuclear medicine quality assurance program in order to comply with the medical exposure Directive 97/43 EURATOM and the legal requirements established in our legislation. This program includes the quality control of equipment and, in addition, the estimation of doses to patients undergoing nuclear medicine examinations. This paper focuses on the second aspect and presents a new computer program, developed in our Department, to estimate the absorbed dose in different organs and the effective dose to the patient, based upon the data from ICRP Publication 53 and its addendum. (Author) 16 refs

  5. Other best-estimate code and methodology applications in addition to licensing

    International Nuclear Information System (INIS)

    Tanarro, A.

    1999-01-01

    Along with their applications for licensing purposes, best-estimate thermalhydraulic codes allow for a wide scope of additional uses and applications in which results that are as realistic and reliable as possible are needed. Although many of these applications have been successfully developed by now, the use of best-estimate codes for applications other than those associated with licensing processes is not well known among the nuclear community. This paper presents some of these applications, briefly describing their most significant and specific features. (Author)

  6. Methodology for estimation of time-dependent surface heat flux due to cryogen spray cooling.

    Science.gov (United States)

    Tunnell, James W; Torres, Jorge H; Anvari, Bahman

    2002-01-01

    Cryogen spray cooling (CSC) is an effective technique to protect the epidermis during cutaneous laser therapies. Spraying a cryogen onto the skin surface creates a time-varying heat flux, effectively cooling the skin during and following the cryogen spurt. In previous studies mathematical models were developed to predict the human skin temperature profiles during the cryogen spraying time. However, no studies have accounted for the additional cooling due to residual cryogen left on the skin surface following the spurt termination. We formulate and solve an inverse heat conduction (IHC) problem to predict the time-varying surface heat flux both during and following a cryogen spurt. The IHC formulation uses measured temperature profiles from within a medium to estimate the surface heat flux. We implement a one-dimensional sequential function specification method (SFSM) to estimate the surface heat flux from internal temperatures measured within an in vitro model in response to a cryogen spurt. Solution accuracy and experimental errors are examined using simulated temperature data. Heat flux following spurt termination appears substantial; however, it is less than that during the spraying time. The estimated time-varying heat flux can subsequently be used in forward heat conduction models to estimate temperature profiles in skin during and following a cryogen spurt and predict appropriate timing for onset of the laser pulse.
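
    A highly simplified 1-D sketch of the sequential estimation idea behind this kind of inverse heat conduction approach: an explicit finite-difference conduction model generates synthetic internal-sensor temperatures from a known flux history, and the inverse pass recovers, at each step, the surface heat flux that best reproduces the next few measurements. All material properties, geometry, spurt timing and "measurements" below are illustrative assumptions, not values from the paper.

    import numpy as np

    k, rho, cp = 0.5, 1000.0, 3500.0          # W/m/K, kg/m^3, J/kg/K (gel-like medium)
    alpha = k / (rho * cp)                    # thermal diffusivity, m^2/s
    L, nx = 2e-3, 41                          # slab thickness and node count
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha                  # stable explicit time step
    sensor, r = 5, 10                         # sensor node index, future steps used per estimate

    def step(T, q):
        """One explicit finite-difference step; node 0 sees surface flux q (W/m^2,
        negative = cooling), the deep boundary node is held fixed."""
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        Tn[0] = T[0] + 2.0 * alpha * dt / dx**2 * (T[1] - T[0] + q * dx / k)
        return Tn

    def sensor_response(T, q, r):
        """Sensor temperatures over the next r steps with a constant flux q."""
        out = np.empty(r)
        for i in range(r):
            T = step(T, q)
            out[i] = T[sensor]
        return out

    def true_q(t):
        """Assumed flux history: -60 kW/m^2 spurt for 0.1 s, then residual cooling."""
        return -60e3 if t < 0.1 else -15e3

    # Forward problem: synthesize internal "measurements" from the known flux history.
    T = np.full(nx, 30.0)
    times = np.arange(0.0, 0.3, dt)
    Y = []
    for t in times:
        T = step(T, true_q(t))
        Y.append(T[sensor])
    Y = np.array(Y)

    # Inverse problem: at each step, pick the flux (held constant over r future
    # steps) that best matches the upcoming measurements, then advance one step.
    T = np.full(nx, 30.0)
    q_hat = []
    for m in range(len(Y) - r):
        base = sensor_response(T, 0.0, r)
        sens = sensor_response(T, 1.0, r) - base
        q = np.dot(sens, Y[m:m + r] - base) / np.dot(sens, sens)
        q_hat.append(q)
        T = step(T, q)

    print("mean estimated flux during spurt [W/m^2]:", np.mean(q_hat[:10]))
    print("mean estimated flux after spurt  [W/m^2]:", np.mean(q_hat[-10:]))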

  7. Support vector regression methodology for estimating global solar radiation in Algeria

    Science.gov (United States)

    Guermoui, Mawloud; Rabehi, Abdelaziz; Gairaa, Kacem; Benkaciali, Said

    2018-01-01

    Accurate estimation of Daily Global Solar Radiation (DGSR) has been a major goal for solar energy applications. In this paper we show the possibility of developing a simple model based on Support Vector Regression (SVM-R), which could be used to estimate DGSR on the horizontal surface in Algeria based only on the sunshine ratio as input. The SVM model has been developed and tested using a data set recorded over three years (2005-2007). The data were collected at the Applied Research Unit for Renewable Energies (URAER) in Ghardaïa city. The data collected in 2005-2006 are used to train the model, while the 2007 data are used to test the performance of the selected model. The measured and estimated values of DGSR were compared statistically during the testing phase using the Root Mean Square Error (RMSE), relative Root Mean Square Error (rRMSE) and correlation coefficient (r2), which amount to 1.59 MJ/m2, 8.46% and 97.4%, respectively. The obtained results show that the SVM-R model is well suited for DGSR estimation using only the sunshine ratio.
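
    A minimal sketch of this kind of SVM-R model using scikit-learn, with the sunshine ratio as the only input. The synthetic data below stand in for the Ghardaïa measurements, and the hyperparameters are illustrative, not those of the paper.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score

    # Synthetic stand-in data: sunshine ratio as the single feature, DGSR (MJ/m^2) as target.
    rng = np.random.default_rng(0)
    sunshine_ratio = rng.uniform(0.2, 1.0, 600).reshape(-1, 1)
    dgsr = 5.0 + 22.0 * sunshine_ratio.ravel() + rng.normal(0.0, 1.5, 600)

    X_train, X_test, y_train, y_test = train_test_split(
        sunshine_ratio, dgsr, test_size=0.3, random_state=0)

    model = SVR(kernel="rbf", C=10.0, epsilon=0.5)   # illustrative hyperparameters
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"RMSE  : {rmse:.2f} MJ/m^2")
    print(f"rRMSE : {100 * rmse / y_test.mean():.1f} %")
    print(f"r^2   : {r2_score(y_test, pred):.3f}")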

  8. Estimation of nitrogen fixation in Leucaena leucocephala using 15N-enrichment methodologies

    Science.gov (United States)

    John A. Parrotta; Dwight D. Baker; Maurice Fried

    1994-01-01

    An estimation of biological nitrogen fixation by Leucaena leucocephala (Lam.) de Wit in monoculture and mixed-species plantations (with Casuarina equisetifolia L. ex J.R. & G. Forst., and Eucalyptus robusta Sm.) was undertaken over a two-year period in Puerto Rico using the 15N-enrichment...

  9. Application of 15N-enrichment methodologies to estimate nitrogen fixation in Casuarina equisetifolia

    Science.gov (United States)

    John A. Parrotta; Dwight D. Baker; Maurice Fried

    1994-01-01

    The 15N-enrichment technique for estimating biological nitrogen fixation in Casuarina equisetifolia J.R. & G. Forst. was evaluated under field conditions in single-species and mixed-species plantings (with a nonfixing reference species, Eucalyptus X robusta J.E. Smith) between...

  10. Estimating the cost of accidents and ill-health at work : a review of methodologies

    NARCIS (Netherlands)

    Weerd, M. de; Tierney, R.; Duuren-Stuurman, B. van; Bertranou, E.

    2014-01-01

    What is the real price to pay for not investing in occupational safety and health? Many studies have previously tackled this question by evaluating the costs of poor or non-existent safety and health at work. This report reviews a selection of these studies and analyses the estimation methods used.

  11. A methodological framework of travel time distribution estimation for urban signalized arterial roads

    NARCIS (Netherlands)

    Zheng, Fangfang; van Zuylen, H.J.; Liu, Xiaobo

    2017-01-01

    Urban travel times are rather variable as a result of many stochastic factors in traffic flows, signals, and other conditions on the infrastructure. However, the most common way both in literature and practice is to estimate or predict only expected travel times, not travel time

  12. METHODOLOGY FOR THE ESTIMATION OF PARAMETERS, OF THE MODIFIED BOUC-WEN MODEL

    Directory of Open Access Journals (Sweden)

    Tomasz HANISZEWSKI

    2015-03-01

    Full Text Available The Bouc-Wen model is a theoretical formulation that can reproduce the real hysteresis loop of a modeled object, for example a wire rope such as those found in the equipment of a crane lifting mechanism. The modified version of the model adopted here has nine parameters, and the determination of such a number of parameters is a complex and problematic issue. This article presents the identification methodology and sample results of numerical simulations. The results were compared with data obtained from laboratory tests of ropes [3], and on this basis it was found that the results are in agreement and that the model can be applied in dynamic systems containing wire ropes in their structures [4].
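
    A hedged sketch of Bouc-Wen simulation and parameter fitting. It uses the standard (not the nine-parameter modified) formulation dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gamma*(dx/dt)*|z|^n with restoring force F = alpha*k*x + (1-alpha)*k*z; all parameter values and the "measured" loop are illustrative assumptions, not the crane-rope data of the paper.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def bouc_wen_force(t_grid, x, dxdt, params):
        """Restoring force of a Bouc-Wen element for a prescribed displacement x(t)."""
        k, alpha, A, beta, gamma, n = params

        def dz(t, z):
            xd = np.interp(t, t_grid, dxdt)
            return [A * xd - beta * abs(xd) * abs(z[0]) ** (n - 1) * z[0]
                    - gamma * xd * abs(z[0]) ** n]

        z = solve_ivp(dz, (t_grid[0], t_grid[-1]), [0.0], t_eval=t_grid).y[0]
        return alpha * k * x + (1.0 - alpha) * k * z

    # Prescribed harmonic displacement and a synthetic "measured" force loop.
    t = np.linspace(0.0, 2.0, 200)
    x = 0.01 * np.sin(2.0 * np.pi * t)
    dxdt = 0.01 * 2.0 * np.pi * np.cos(2.0 * np.pi * t)
    true_params = (5.0e4, 0.3, 1.0, 120.0, 60.0, 1.5)
    f_meas = bouc_wen_force(t, x, dxdt, true_params) \
             + np.random.default_rng(0).normal(0.0, 2.0, t.size)

    # Identification: least-squares fit of the model parameters to the measured loop.
    def residual(p):
        return bouc_wen_force(t, x, dxdt, p) - f_meas

    fit = least_squares(residual, x0=(4.0e4, 0.5, 1.0, 100.0, 50.0, 1.2),
                        bounds=([1e3, 0.0, 0.1, 0.0, -500.0, 1.0],
                                [1e6, 1.0, 10.0, 500.0, 500.0, 3.0]))
    print("identified parameters:", fit.x)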

  13. Potential of neuro-fuzzy methodology to estimate noise level of wind turbines

    Science.gov (United States)

    Nikolić, Vlastimir; Petković, Dalibor; Por, Lip Yee; Shamshirband, Shahaboddin; Zamani, Mazdak; Ćojbašić, Žarko; Motamedi, Shervin

    2016-01-01

    Wind turbine noise has become a significant problem because of the increasing number of wind farms, as renewable energy becomes one of the most influential energy sources. However, wind turbine noise generation and propagation are not fully understood in all aspects. Mechanical noise of wind turbines can be neglected, since the aerodynamic noise of the wind turbine blades is the main source of noise generation. Numerical simulation of the noise effects of a wind turbine can be a very challenging task. Therefore, in this article a soft computing method is used to evaluate the noise level of wind turbines. The main goal of the study is to estimate wind turbine noise with respect to wind speed at different heights and for different sound frequencies. An adaptive neuro-fuzzy inference system (ANFIS) is used to estimate the wind turbine noise levels.

  14. Estimating Limits for the Geothermal Energy Potential of Abandoned Underground Coal Mines: A Simple Methodology

    Directory of Open Access Journals (Sweden)

    Rafael Rodríguez Díez

    2014-07-01

    Full Text Available Flooded mine workings have good potential as low-enthalpy geothermal resources, which could be used for heating and cooling purposes, thus making use of the mines long after mining activity itself ceases. It would be useful to estimate the scale of the geothermal potential represented by abandoned and flooded underground mines in Europe. From a few practical considerations, a procedure has been developed for assessing the geothermal energy potential of abandoned underground coal mines, as well as for quantifying the reduction in CO2 emissions associated with using the mines instead of conventional heating/cooling technologies. On this basis the authors have been able to estimate that the geothermal energy available from underground coal mines in Europe is on the order of several thousand megawatts thermal. Although this is a gross value, it can be considered a minimum, which in itself vindicates all efforts to investigate harnessing it.

  15. Development of the methodology for estimation of dose from a source

    International Nuclear Information System (INIS)

    Golebaone, E.M.

    2012-04-01

    The geometry of a source plays an important role in determining which method to apply in order to accurately estimate the dose from the source. If the wrong source geometry is used, the dose received may be underestimated or overestimated, which may lead to wrong decisions in dealing with the exposure situation. In this project a moisture density gauge was used to represent a point source in order to demonstrate the key parameters to be used when estimating the dose from a point source. The parameters to be considered are the activity of the source, the ambient dose rate, the gamma constant for the radionuclide, and the transport index on the package of the source. The distance from the source and the time spent in the radiation field must also be known in order to calculate the dose. (author)
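
    A minimal sketch of the point-source estimate implied by these parameters, using the inverse-square relation dose_rate = Gamma * A / d^2 and dose = dose_rate * t. The gamma constant, activity, distance and exposure time below are illustrative values, not data from the project.

    def point_source_dose(gamma_uSv_m2_per_h_GBq, activity_GBq, distance_m, time_h):
        """External dose (uSv) from an unshielded point source."""
        dose_rate = gamma_uSv_m2_per_h_GBq * activity_GBq / distance_m ** 2
        return dose_rate * time_h

    # Example: a Cs-137-like gamma constant of roughly 90 uSv*m^2/(h*GBq) (approximate,
    # for illustration only), a 1.5 GBq source, an operator 2 m away for 0.5 h.
    dose = point_source_dose(90.0, 1.5, 2.0, 0.5)
    print(f"estimated dose: {dose:.1f} uSv")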

  16. An ultrasonic methodology to non-destructively estimate the grain orientation in an anisotropic weld

    Directory of Open Access Journals (Sweden)

    Wirdelius Håkan

    2014-06-01

    Full Text Available The initial step towards a non-destructive technique that estimates the grain orientation in an anisotropic weld is presented in this paper. The purpose is to aid future forward simulations of ultrasonic NDT of this kind of weld so that better results can be achieved. A forward model is introduced that consists of a weld model, a transmitter model, a receiver model and a 2D ray-tracing algorithm. An inversion based on a multi-objective genetic algorithm is also presented. Experiments were conducted for both P and SV waves in order to collect enough data for the inversion. Calculations were carried out to complete the estimation with both the synthetic and the experimental data. Concluding remarks are presented at the end of the paper.

  17. Estimation of the total number of mast cells in the human umbilical cord. A methodological study

    DEFF Research Database (Denmark)

    Engberg Damsgaard, T M; Windelborg Nielsen, B; Sørensen, Flemming Brandt

    1992-01-01

    The aim of the present study was to estimate the total number of mast cells in the human umbilical cord. Using 50 micron-thick paraffin sections made from a systematic random sample of umbilical cord, the total number of mast cells per cord was estimated using a combination of the optical disector and fractionated sampling. The mast cell of the human umbilical cord was found in Wharton's jelly, most frequently in close proximity to the three blood vessels. No consistent pattern of variation in mast cell numbers from the fetal end of the umbilical cord towards the placenta was seen. The total number of mast cells found in the umbilical cord was 5,200,000 (median), range 2,800,000-16,800,000 (n = 7), that is, 156,000 mast cells per gram umbilical cord (median), range 48,000-267,000. Thus, the umbilical cord constitutes an adequate source of mast cells for further investigation.

  18. Assessment of indicators and collection methodology to estimate nutrient digestibility in buffaloes

    Directory of Open Access Journals (Sweden)

    Luciana Felizardo Pereira Soares

    2011-09-01

    Full Text Available Dry fecal matter production in buffaloes was estimated using the indicators indigestible neutral detergent fiber, indigestible acid detergent fiber and indigestible dry matter, each incubated for 144 and 288 hours, as well as chromium oxide (Cr2O3) and enriched and purified isolated lignin (LIPE®), under two sampling schemes (3 and 5 days). The sample consisted of five castrated animals with an average weight of 300 ± 0.6 kg fed on elephant grass cv. Cameroon (Pennisetum purpureum). The experimental design consisted of randomized blocks in subdivided plots. Production of dry fecal matter was overestimated when using Cr2O3, indigestible acid detergent fiber 144 hours, indigestible neutral detergent fiber 144 hours, indigestible neutral detergent fiber 288 hours and indigestible dry matter 144 hours, while indigestible acid detergent fiber 288 hours, indigestible dry matter 288 hours and LIPE® did not differ from total collection. The same result was observed for the apparent digestibility of nutrients. There was no difference in dry fecal matter production and digestibility between the collection periods of 3 and 5 days, demonstrating that a collection period of three days can be used to estimate dry fecal matter production in buffaloes. A three-day period of sample collection, in order to estimate dry fecal matter production and apparent digestibility coefficients, is therefore recommended. The use of LIPE®, indigestible acid detergent fiber and indigestible dry matter as indicators, the latter two incubated for 288 hours, results in accurate estimates of dry fecal matter production in confined buffaloes fed a forage-based diet.

  19. Estimating Renewable Energy Economic Potential in the United States: Methodology and Initial Results

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Austin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Beiter, Philipp [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Davidson, Carolyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Denholm, Paul [National Renewable Energy Lab. (NREL), Golden, CO (United States); Melius, Jennifer [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hettinger, Dylan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mulcahy, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Porro, Gian [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-08-01

    The report describes a geospatial analysis method to estimate the economic potential of several renewable resources available for electricity generation in the United States. Economic potential, one measure of renewable generation potential, is defined in this report as the subset of the available resource technical potential where the cost required to generate the electricity (which determines the minimum revenue requirements for development of the resource) is below the revenue available in terms of displaced energy and displaced capacity.
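
    A toy sketch of the screening step described above: for each candidate site, a simple levelized cost of energy (the minimum revenue requirement) is compared with the value of displaced energy and capacity, and the site counts toward economic potential when the cost is lower. The site data, fixed charge rate and LCOE simplification are all illustrative assumptions, not NREL inputs.

    import pandas as pd

    sites = pd.DataFrame({
        "site_id":                [1, 2, 3, 4],
        "capacity_mw":            [100, 80, 150, 60],
        "capacity_factor":        [0.42, 0.28, 0.35, 0.48],
        "capex_usd_per_kw":       [1500, 1600, 1400, 1550],
        "fom_usd_per_kw_yr":      [40, 45, 38, 42],
        "energy_value_usd_mwh":   [32, 30, 28, 38],   # displaced energy value
        "capacity_value_usd_mwh": [8, 6, 5, 10],      # displaced capacity value
    })

    fcr = 0.08  # fixed charge rate used to annualize capital cost (an assumption)

    # Simple LCOE: annualized capital plus fixed O&M divided by annual generation.
    annual_mwh_per_kw = sites.capacity_factor * 8760 / 1000
    sites["lcoe_usd_mwh"] = (sites.capex_usd_per_kw * fcr + sites.fom_usd_per_kw_yr) / annual_mwh_per_kw

    # Economic potential: capacity at sites where LCOE does not exceed available revenue.
    revenue = sites.energy_value_usd_mwh + sites.capacity_value_usd_mwh
    economic = sites[sites.lcoe_usd_mwh <= revenue]
    print(economic[["site_id", "lcoe_usd_mwh"]].round(1))
    print("economic potential (MW):", economic.capacity_mw.sum())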

  20. A smartphone-driven methodology for estimating physical activities and energy expenditure in free living conditions.

    Science.gov (United States)

    Guidoux, Romain; Duclos, Martine; Fleury, Gérard; Lacomme, Philippe; Lamaudière, Nicolas; Manenq, Pierre-Henri; Paris, Ludivine; Ren, Libo; Rousset, Sylvie

    2014-12-01

    This paper introduces a function dedicated to the estimation of total energy expenditure (TEE) of daily activities based on data from accelerometers integrated into smartphones. The use of mass-market sensors such as accelerometers offers a promising solution for the general public due to the growing smartphone market over the last decade. The TEE estimation function quality was evaluated using data from intensive numerical experiments based, first, on 12 volunteers equipped with a smartphone and two research sensors (Armband and Actiheart) in controlled conditions (CC) and, then, on 30 other volunteers in free-living conditions (FLC). The TEE given by these two sensors in both conditions and estimated from the metabolic equivalent tasks (MET) in CC served as references during the creation and evaluation of the function. The TEE mean gap in absolute value between the function and the three references was 7.0%, 16.4% and 2.7% in CC, and 17.0% and 23.7% according to Armband and Actiheart, respectively, in FLC. This is the first step in the definition of a new feedback mechanism that promotes self-management and daily-efficiency evaluation of physical activity as part of an information system dedicated to the prevention of chronic diseases. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Methodology to estimate the relative pressure field from noisy experimental velocity data

    International Nuclear Information System (INIS)

    Bolin, C D; Raguin, L G

    2008-01-01

    The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain into a regular domain. To lessen the propagation of the noise inherent to the velocity measurements, three filters - a median filter and two physics-based filters - are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter for the estimation of the relative pressure field for realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies, in a least-squares sense, three constraints simultaneously: consistency between measured and filtered velocity fields, divergence-free and additional smoothness conditions. This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared to no noise filtering, in conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 × 20 discretized flow domain (25 × 25 computational domain).

  2. Comparison of sampling methodologies and estimation of population parameters for a temporary fish ectoparasite

    Directory of Open Access Journals (Sweden)

    J.M. Artim

    2016-08-01

    Full Text Available Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.

  3. A methodology to estimate greenhouse gases emissions in Life Cycle Inventories of wastewater treatment plants

    International Nuclear Information System (INIS)

    Rodriguez-Garcia, G.; Hospido, A.; Bagley, D.M.; Moreira, M.T.; Feijoo, G.

    2012-01-01

    The main objective of this paper is to present the Direct Emissions Estimation Model (DEEM), a model for the estimation of CO2 and N2O emissions from a wastewater treatment plant (WWTP). This model is consistent with non-specific but widely used models such as AS/AD and ASM no. 1 and presents the benefits of simplicity and application over a common WWTP simulation platform, BioWin®, making it suitable for Life Cycle Assessment and Carbon Footprint studies. Its application in a Spanish WWTP indicates direct N2O emissions to be 8 times larger than those associated with electricity use and thus relevant for LCA. CO2 emissions can be of similar importance to electricity-associated ones provided that 20% of them are of non-biogenic origin. - Highlights: ► A model has been developed for the estimation of GHG emissions in WWTP. ► Model was consistent with both ASM no. 1 and AS/AD. ► N2O emissions are 8 times more relevant than the one associated with electricity. ► CO2 emissions are as important as electricity if 20% of it is non-biogenic.

  4. CSNI Status summary on utilization of best-estimate methodology in safety analysis and licensing

    International Nuclear Information System (INIS)

    1996-10-01

    The PWG 2 Task Group on Thermal Hydraulic System Behavior has discussed the use of best-estimate codes in the licensing process (codes that model the thermal hydraulic processes important to assessing safety system performance). The Task Group set out to determine the prevailing practices in member countries concerning safety assessment and safety review of transients affecting the reactor coolant system. A summary of the information provided by member countries in response to eleven questions is given: Who is Responsible for Safety Analysis? Who is Responsible for Review and Evaluation of Safety Analysis? Do the Regulations Permit the use of Best-Estimate Codes? What are the Requirements for What Constitutes a Best Estimate Code? What are the Requirements Concerning Code Documentation? What are the Requirements for Review of Code Models and Correlations? What are the Requirements Concerning Code Assessment? What are the Requirements Concerning Initial and Boundary Conditions? What are the Requirements Concerning Operability of Active Equipment? What are the Requirements Concerning Operator Actions?

  5. Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology

    Science.gov (United States)

    Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of an actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy logic of the fuzzy inference system. The back-propagation learning algorithm is used for training this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.

  6. A methodology to estimate greenhouse gases emissions in Life Cycle Inventories of wastewater treatment plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Garcia, G., E-mail: gonzalo.rodriguez.garcia@usc.es [Department of Chemical Engineering, University of Santiago de Compostela, Rua Lope Gomez de Marzoa, S/N, 15782, Santiago de Compostela (Spain); Hospido, A., E-mail: almudena.hospido@usc.es [Department of Chemical Engineering, University of Santiago de Compostela, Rua Lope Gomez de Marzoa, S/N, 15782, Santiago de Compostela (Spain); Bagley, D.M., E-mail: bagley@uwyo.edu [Department of Chemical and Petroleum Engineering, University of Wyoming, 82072 Laramie, WY (United States); Moreira, M.T., E-mail: maite.moreira@usc.es [Department of Chemical Engineering, University of Santiago de Compostela, Rua Lope Gomez de Marzoa, S/N, 15782, Santiago de Compostela (Spain); Feijoo, G., E-mail: gumersindo.feijoo@usc.es [Department of Chemical Engineering, University of Santiago de Compostela, Rua Lope Gomez de Marzoa, S/N, 15782, Santiago de Compostela (Spain)

    2012-11-15

    The main objective of this paper is to present the Direct Emissions Estimation Model (DEEM), a model for the estimation of CO2 and N2O emissions from a wastewater treatment plant (WWTP). This model is consistent with non-specific but widely used models such as AS/AD and ASM no. 1 and presents the benefits of simplicity and application over a common WWTP simulation platform, BioWin®, making it suitable for Life Cycle Assessment and Carbon Footprint studies. Its application in a Spanish WWTP indicates direct N2O emissions to be 8 times larger than those associated with electricity use and thus relevant for LCA. CO2 emissions can be of similar importance to electricity-associated ones provided that 20% of them are of non-biogenic origin. - Highlights: ► A model has been developed for the estimation of GHG emissions in WWTP. ► Model was consistent with both ASM no. 1 and AS/AD. ► N2O emissions are 8 times more relevant than the one associated with electricity. ► CO2 emissions are as important as electricity if 20% of it is non-biogenic.

  7. A soft-computing methodology for noninvasive time-spatial temperature estimation.

    Science.gov (United States)

    Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A

    2008-02-01

    The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals collected from a gel-based phantom were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were used to estimate the temperature at different intensities and at points arranged across the therapeutic transducer radial line (60 mm from the transducer face). Model inputs, as well as the number of neurons, were selected using the multi-objective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not previously assessed, and some of them presented a maximum absolute error below 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.

  8. The South African Tuberculosis Care Cascade: Estimated Losses and Methodological Challenges.

    Science.gov (United States)

    Naidoo, Pren; Theron, Grant; Rangaka, Molebogeng X; Chihota, Violet N; Vaughan, Louise; Brey, Zameer O; Pillay, Yogan

    2017-11-06

    While tuberculosis incidence and mortality are declining in South Africa, meeting the goals of the End TB Strategy requires an invigorated programmatic response informed by accurate data. Enumerating the losses at each step in the care cascade enables appropriate targeting of interventions and resources. We estimated the tuberculosis burden; the number and proportion of individuals with tuberculosis who accessed tests, had tuberculosis diagnosed, initiated treatment, and successfully completed treatment for all tuberculosis cases, for those with drug-susceptible tuberculosis (including human immunodeficiency virus (HIV)-coinfected cases) and rifampicin-resistant tuberculosis. Estimates were derived from national electronic tuberculosis register data, laboratory data, and published studies. The overall tuberculosis burden was estimated to be 532,005 cases (range, 333,760-764,480 cases), with successful completion of treatment in 53% of cases. Losses occurred at multiple steps: 5% at test access, 13% at diagnosis, 12% at treatment initiation, and 17% at successful treatment completion. Overall losses were similar among all drug-susceptible cases and those with HIV coinfection (54% and 52%, respectively, successfully completed treatment). Losses were substantially higher among rifampicin-resistant cases, with only 22% successfully completing treatment. Although the vast majority of individuals with tuberculosis engaged the public health system, just over half were successfully treated. Urgent efforts are required to improve implementation of existing policies and protocols to close gaps in tuberculosis diagnosis, treatment initiation, and successful treatment completion. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.

  9. Estimating dead wood during national forest inventories: a review of inventory methodologies and suggestions for harmonization.

    Science.gov (United States)

    Woodall, Christopher W; Rondeux, Jacques; Verkerk, Pieter J; Ståhl, Göran

    2009-10-01

    Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world's forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only just begun DW inventories and employ very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.

  10. Estimation of North American population doses resulting from radon-222 release in western United States: methodology

    International Nuclear Information System (INIS)

    Fields, D.E.; Travis, C.C.; Watson, A.P.; McDowell-Boyer, L.M.

    1979-12-01

    The report represents a compilation of computer codes used to estimate potential human exposures and inhalation doses due to unit releases of 222Rn from uranium milling sites in the western United States. The populations considered for potential exposure to risk from 222Rn and associated daughters are the inhabitants of North America between 20° and 60° North latitude. The primary function of these codes is to spatially integrate atmospheric radionuclide concentrations with current population data for the geographic area under consideration. It is expected that these codes will be of assistance to anyone interested in assessing nuclear or nonnuclear population exposures over large geographic areas.

  11. Estimation of the daily global solar radiation based on the Gaussian process regression methodology in the Saharan climate

    Science.gov (United States)

    Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said

    2018-06-01

    Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed in order to improve the estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used in order to validate the above methodology. Several input combinations have been tested; it was found that a GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
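
    A minimal scikit-learn sketch of this kind of GPR model with the three inputs named above (sunshine duration, minimum air temperature, relative humidity). The synthetic data and the kernel choice are assumptions for illustration, not the configuration used in the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    # Synthetic stand-in data for daily records (600 days).
    rng = np.random.default_rng(3)
    n = 600
    X = np.column_stack([
        rng.uniform(2, 12, n),     # sunshine duration (h)
        rng.uniform(0, 25, n),     # minimum air temperature (deg C)
        rng.uniform(10, 90, n),    # relative humidity (%)
    ])
    y = 2.0 + 1.9 * X[:, 0] + 0.1 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0.0, 1.0, n)

    # Anisotropic RBF kernel plus a noise term (illustrative initial length scales).
    kernel = ConstantKernel(1.0) * RBF(length_scale=[5.0, 10.0, 30.0]) + WhiteKernel(1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X[:450], y[:450])

    pred, std = gpr.predict(X[450:], return_std=True)
    rmse = np.sqrt(np.mean((pred - y[450:]) ** 2))
    print(f"RMSE on held-out days: {rmse:.2f} MJ/m^2")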

  12. Hydrologic evaluation methodology for estimating water movement through the unsaturated zone at commercial low-level radioactive waste disposal sites

    International Nuclear Information System (INIS)

    Meyer, P.D.; Rockhold, M.L.; Nichols, W.E.; Gee, G.W.

    1996-01-01

    This report identifies key technical issues related to hydrologic assessment of water flow in the unsaturated zone at low-level radioactive waste (LLW) disposal facilities. In addition, a methodology for incorporating these issues in the performance assessment of proposed LLW disposal facilities is identified and evaluated. The issues discussed fall into four areas: estimating the water balance at a site (i.e., infiltration, runoff, water storage, evapotranspiration, and recharge); analyzing the hydrologic performance of engineered components of a facility; evaluating the application of models to the prediction of facility performance; and estimating the uncertainty in predicted facility performance. To illustrate the application of the methodology, two examples are presented. The first example is of a below ground vault located in a humid environment. The second example looks at a shallow land burial facility located in an arid environment. The examples utilize actual site-specific data and realistic facility designs. The two examples illustrate the issues unique to humid and arid sites as well as the issues common to all LLW sites. Strategies for addressing the analytical difficulties arising in any complex hydrologic evaluation of the unsaturated zone are demonstrated

  13. Hydrologic evaluation methodology for estimating water movement through the unsaturated zone at commercial low-level radioactive waste disposal sites

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, P.D.; Rockhold, M.L.; Nichols, W.E.; Gee, G.W. [Pacific Northwest Lab., Richland, WA (United States)

    1996-01-01

    This report identifies key technical issues related to hydrologic assessment of water flow in the unsaturated zone at low-level radioactive waste (LLW) disposal facilities. In addition, a methodology for incorporating these issues in the performance assessment of proposed LLW disposal facilities is identified and evaluated. The issues discussed fall into four areas: estimating the water balance at a site (i.e., infiltration, runoff, water storage, evapotranspiration, and recharge); analyzing the hydrologic performance of engineered components of a facility; evaluating the application of models to the prediction of facility performance; and estimating the uncertainty in predicted facility performance. To illustrate the application of the methodology, two examples are presented. The first example is of a below ground vault located in a humid environment. The second example looks at a shallow land burial facility located in an arid environment. The examples utilize actual site-specific data and realistic facility designs. The two examples illustrate the issues unique to humid and arid sites as well as the issues common to all LLW sites. Strategies for addressing the analytical difficulties arising in any complex hydrologic evaluation of the unsaturated zone are demonstrated.

  14. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    Science.gov (United States)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.

  15. Estimation of waste water treatment plant methane emissions: methodology and results from a short campaign

    Science.gov (United States)

    Yver-Kwok, C. E.; Müller, D.; Caldow, C.; Lebegue, B.; Mønster, J. G.; Rella, C. W.; Scheutz, C.; Schmidt, M.; Ramonet, M.; Warneke, T.; Broquet, G.; Ciais, P.

    2013-10-01

    This paper describes different methods to estimate methane emissions at different scales. These methods are applied to a waste water treatment plant (WWTP) located in Valence, France. We show that Fourier Transform Infrared (FTIR) measurements as well as Cavity Ring Down Spectroscopy (CRDS) can be used to measure emissions from the process scale to the regional scale. To estimate the total emissions, we investigate a tracer release method (using C2H2) and the Radon tracer method (using 222Rn). For process-scale emissions, both tracer release and chamber techniques were used. We show that the tracer release method is suitable to quantify facility- and some process-scale emissions, while the Radon tracer method encompasses not only the treatment station but also a large area around it. Thus the Radon tracer method is more representative of the regional emissions around the city. Uncertainties for each method are described. Applying the methods to CH4 emissions, we find that the main source of emissions of the plant was not identified with certainty during this short campaign, although the primary source is likely to be the solid sludge. Overall, the waste water treatment plant represents a small part (3%) of the methane emissions of the city of Valence and its surroundings, which is in agreement with the national inventories.
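
    A hedged sketch of the tracer release calculation: CH4 emissions follow from the ratio of the downwind CH4 and C2H2 plume integrals, scaled by the known tracer release rate and the molar mass ratio. The transects and release rate below are synthetic, illustrative values, not measurements from the Valence campaign.

    import numpy as np

    M_CH4, M_C2H2 = 16.04, 26.04          # molar masses, g/mol
    tracer_release_kg_h = 0.50            # known C2H2 release rate at the facility (assumed)

    # Synthetic plume transects: background-subtracted mole fractions (ppb) sampled
    # at equal distance intervals across the downwind plume.
    distance_m = np.linspace(0.0, 300.0, 61)
    ch4_excess_ppb = 80.0 * np.exp(-((distance_m - 150.0) / 40.0) ** 2)
    c2h2_excess_ppb = 12.0 * np.exp(-((distance_m - 150.0) / 40.0) ** 2)

    # Integrate both plumes across the transect (uniform spacing) and form the ratio.
    dx = distance_m[1] - distance_m[0]
    ch4_area = ch4_excess_ppb.sum() * dx
    c2h2_area = c2h2_excess_ppb.sum() * dx

    ch4_emission_kg_h = tracer_release_kg_h * (ch4_area / c2h2_area) * (M_CH4 / M_C2H2)
    print(f"estimated CH4 emission: {ch4_emission_kg_h:.2f} kg/h")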

  16. Wavelet-Based Methodology for Evolutionary Spectra Estimation of Nonstationary Typhoon Processes

    Directory of Open Access Journals (Sweden)

    Guang-Dong Zhou

    2015-01-01

    Full Text Available Closed-form expressions are proposed to estimate the evolutionary power spectral density (EPSD of nonstationary typhoon processes by employing the wavelet transform. Relying on the definition of the EPSD and the concept of the wavelet transform, wavelet coefficients of a nonstationary typhoon process at a certain time instant are interpreted as the Fourier transform of a new nonstationary oscillatory process, whose modulating function is equal to the modulating function of the nonstationary typhoon process multiplied by the wavelet function in time domain. Then, the EPSD of nonstationary typhoon processes is deduced in a closed form and is formulated as a weighted sum of the squared moduli of time-dependent wavelet functions. The weighted coefficients are frequency-dependent functions defined by the wavelet coefficients of the nonstationary typhoon process and the overlapping area of two shifted wavelets. Compared with the EPSD, defined by a sum of the squared moduli of the wavelets in frequency domain in literature, this paper provides an EPSD estimation method in time domain. The theoretical results are verified by uniformly modulated nonstationary typhoon processes and non-uniformly modulated nonstationary typhoon processes.

  17. Estimating Renewable Energy Economic Potential in the United States. Methodology and Initial Results

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Austin; Beiter, Philipp; Heimiller, Donna; Davidson, Carolyn; Denholm, Paul; Melius, Jennifer; Lopez, Anthony; Hettinger, Dylan; Mulcahy, David; Porro, Gian

    2016-08-01

    This report describes a geospatial analysis method to estimate the economic potential of several renewable resources available for electricity generation in the United States. Economic potential, one measure of renewable generation potential, may be defined in several ways. For example, one definition might be expected revenues (based on local market prices) minus generation costs, considered over the expected lifetime of the generation asset. Another definition might be generation costs relative to a benchmark (e.g., a natural gas combined cycle plant) using assumptions of fuel prices, capital cost, and plant efficiency. Economic potential in this report is defined as the subset of the available resource technical potential where the cost required to generate the electricity (which determines the minimum revenue requirements for development of the resource) is below the revenue available in terms of displaced energy and displaced capacity. The assessment is conducted at a high geospatial resolution (more than 150,000 technology-specific sites in the continental United States) to capture the significant variation in local resource, costs, and revenue potential. This metric can be a useful screening factor for understanding the economic viability of renewable generation technologies at a specific location. In contrast to many common estimates of renewable energy potential, economic potential does not consider market dynamics, customer demand, or most policy drivers that may incent renewable energy generation.

  18. Drought risk assessment under climate change is sensitive to methodological choices for the estimation of evaporative demand.

    Science.gov (United States)

    Dewes, Candida F; Rangwala, Imtiaz; Barsugli, Joseph J; Hobbins, Michael T; Kumar, Sanjiv

    2017-01-01

    Several studies have projected increases in drought severity, extent and duration in many parts of the world under climate change. We examine sources of uncertainty arising from the methodological choices for the assessment of future drought risk in the continental US (CONUS). One such uncertainty is in the climate models' expression of evaporative demand (E0), which is not a direct climate model output but has been traditionally estimated using several different formulations. Here we analyze daily output from two CMIP5 GCMs to evaluate how differences in E0 formulation, treatment of meteorological driving data, choice of GCM, and standardization of time series influence the estimation of E0. These methodological choices yield different assessments of spatio-temporal variability in E0 and different trends in 21st century drought risk. First, we estimate E0 using three widely used E0 formulations: Penman-Monteith; Hargreaves-Samani; and Priestley-Taylor. Our analysis, which primarily focuses on the May-September warm-season period, shows that E0 climatology and its spatial pattern differ substantially between these three formulations. Overall, we find higher magnitudes of E0 and its interannual variability using Penman-Monteith, in particular for regions like the Great Plains and southwestern US where E0 is strongly influenced by variations in wind and relative humidity. When examining projected changes in E0 during the 21st century, there are also large differences among the three formulations, particularly the Penman-Monteith relative to the other two formulations. The 21st century E0 trends, particularly in percent change and standardized anomalies of E0, are found to be sensitive to the long-term mean value and the amplitude of interannual variability, i.e. if the magnitude of E0 and its interannual variability are relatively low for a particular E0 formulation, then the normalized or standardized 21st century trend based on that formulation is amplified
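
    For concreteness, a minimal sketch of one of the three formulations compared above, Hargreaves-Samani, which estimates evaporative demand from temperature extremes and extraterrestrial radiation as E0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), with Ra expressed as equivalent evaporation in mm/day. The input values below are illustrative, not CMIP5 model output.

    import numpy as np

    def hargreaves_samani_e0(tmax_c, tmin_c, ra_mm_day):
        """Daily evaporative demand (mm/day) from temperature extremes and Ra."""
        tmean = 0.5 * (tmax_c + tmin_c)
        return 0.0023 * ra_mm_day * (tmean + 17.8) * np.sqrt(np.maximum(tmax_c - tmin_c, 0.0))

    # Warm-season example for a Great Plains-like grid cell (assumed values).
    tmax = np.array([31.0, 33.5, 35.0, 29.0])
    tmin = np.array([17.0, 18.5, 20.0, 16.0])
    ra = np.array([16.4, 16.6, 16.5, 16.2])   # extraterrestrial radiation, mm/day

    print(hargreaves_samani_e0(tmax, tmin, ra))   # E0 in mm/day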

  19. Investigation of Weather Radar Quantitative Precipitation Estimation Methodologies in Complex Orography

    Directory of Open Access Journals (Sweden)

    Mario Montopoli

    2017-02-01

    Near surface quantitative precipitation estimation (QPE) from weather radar measurements is an important task for feeding hydrological models, limiting the impact of severe rain events at the ground as well as aiding validation studies of satellite-based rain products. To date, several works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurement of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization radar variables not only to ensure a good level of data quality but also as a direct input to rain estimation equations. One of the most important limiting factors in radar QPE accuracy is the vertical variability of particle size distribution, which affects all the acquired radar variables as well as estimated rain rates at different levels. This is particularly impactful in mountainous areas, where the sampled altitudes are likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that use the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered. In that case, all the radar variables used in the rain estimation process should be consistently extrapolated at the surface to try to maintain the correlations among them. To avoid facing such complexity, especially with a view to operational implementation, we propose looking at the features of the vertical profile of rain (VPR), i.e., after performing the rain estimation. This procedure allows characterization of a single variable (i.e., rain when dealing with
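    As context for the rain-estimation step discussed above, the sketch below applies a classic single-polarization Z-R relation; it is only an illustration, since the dual-polarization algorithms and VPR-based corrections analyzed in the study use different, locally tuned relations.

    ```python
    def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
        """Rain rate R (mm/h) from radar reflectivity via Z = a * R**b.
        Marshall-Palmer-type coefficients (a=200, b=1.6) are shown for illustration;
        operational QPE, including dual-polarization algorithms, uses locally tuned
        relations and additional variables."""
        z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> mm^6 m^-3
        return (z_linear / a) ** (1.0 / b)

    print(round(rain_rate_from_reflectivity(40.0), 1))  # ~11.5 mm/h at 40 dBZ
    ```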

  20. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    Science.gov (United States)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, the performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (Arrhenius model). The results show good fitting of the degradation model with the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year, the reliability of SPTRs designed to operate for more than 10 years.
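    A minimal sketch of the two building blocks named above, Arrhenius acceleration between stress and use temperatures and a two-parameter Weibull reliability function, is given below; the activation energy, temperatures, lifetimes, and shape parameter are hypothetical placeholders, not values from the paper.

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
        """Acceleration factor between an elevated stress temperature and the use
        temperature under the Arrhenius model."""
        t_use_k = t_use_c + 273.15
        t_stress_k = t_stress_c + 273.15
        return math.exp(ea_ev / K_B * (1.0 / t_use_k - 1.0 / t_stress_k))

    def weibull_reliability(t, eta, beta):
        """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
        return math.exp(-((t / eta) ** beta))

    # Hypothetical numbers: activation energy 0.7 eV, use at 20 C, stress at 60 C,
    # 8,000 h of equivalent life demonstrated at the stress condition.
    af = arrhenius_acceleration_factor(0.7, 20.0, 60.0)
    life_use_h = 8000.0 * af
    print(round(af, 1), round(life_use_h))                               # ~27.9, ~223,000 h
    print(round(weibull_reliability(10 * 8760.0, life_use_h, 2.0), 3))   # R at 10 years
    ```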

  1. A human error probability estimate methodology based on fuzzy inference and expert judgment on nuclear plants

    International Nuclear Information System (INIS)

    Nascimento, C.S. do; Mesquita, R.N. de

    2009-01-01

    Recent studies point to human error as an important factor in many industrial and nuclear accidents: Three Mile Island (1979), Bhopal (1984), Chernobyl and Challenger (1986) are classical examples. Human contribution to these accidents may be better understood and analyzed using Human Reliability Analysis (HRA), which has become an essential part of Probabilistic Safety Analysis (PSA) of nuclear plants. Both HRA and PSA depend on Human Error Probabilities (HEP) for quantitative analysis. These probabilities are strongly affected by Performance Shaping Factors (PSFs), which have a direct effect on human behavior and thus shape HEP according to the specific environmental conditions and the individual characteristics of the personnel performing the actions. This dependence on PSFs creates a data availability problem, since the few existing databases are either too generic or too specific; in addition, most nuclear plants do not keep historical records of human error occurrences. Therefore, to overcome this data shortage, a methodology based on fuzzy inference and expert judgment was employed in this paper to determine human error occurrence probabilities and to evaluate the PSFs affecting actions performed by operators in a nuclear power plant (the IEA-R1 nuclear reactor). The HEP values obtained were compared with reference data from the current literature to show the coherence and validity of the method. This comparison leads to the conclusion that the results can be employed in both HRA and PSA, enabling efficient assessment of plant safety conditions, operational procedures, and potential improvements to local working conditions. (author)

  2. On the issue of methodology of estimating the regional competitive advantages by types of activity

    Directory of Open Access Journals (Sweden)

    Azat Rashitovich Safiullin

    2015-06-01

    Objective: to develop a research methodology for assessing the competitive position of territories by types of economic activity, to define the relevant types of economic activity, and to diagnose their competitive advantages. Methods: a differentiated approach to the analysis of competitiveness based on matrix models of competitive positioning, which allows identifying the relevant types of economic activity of the territory. This approach determined the choice of specific research methods: dialectics, abstraction, and systematic, logical, structural, comparative and statistical analysis. The application of the above methods helped to ensure the validity of the analysis results and of the theoretical and practical conclusions. Results: the study drew on the experience and knowledge gained during an earlier series of projects assessing the efficiency of the economy of Tatarstan Republic in 2005-2011. Previous reports dealt with changes in the positions of the selected industries of the Republic and with the structure and dynamics of the competitive position of these industries compared with the leading Russian regions. A distinctive feature of the research results presented in this article is a comparative analysis of the competitive advantages of economic activities of Tatarstan Republic based on the matrix model of competitive positioning. Scientific novelty: for the first time, a matrix model for diagnosing the competitive advantages of a territory by type of economic activity is proposed, which allows identifying the priority industrial portfolio and providing targeted management actions to enhance its investment attractiveness. Practical value: the main provisions and conclusions of the article can be used by legislative and executive authorities of the Russian Federation, the business community, and research institutions and organizations to develop strategies and programs of socioeconomic development, territorial planning schemes, and priority directions of industrial and investment

  3. Estimating the cost-savings associated with bundling maternal and child health interventions: a proposed methodology.

    Science.gov (United States)

    Adesina, Adebiyi; Bollinger, Lori A

    2013-01-01

    There is a pressing need to include cost data in the Lives Saved Tool (LiST). This paper proposes a method that combines data from both the WHO CHOosing Interventions that are Cost-Effective (CHOICE) database and the OneHealth Tool (OHT) to develop unit costs for delivering child and maternal health services, both alone and bundled. First, a translog cost function is estimated to calculate factor shares of personnel, consumables, other direct (variable or recurrent costs excluding personnel and consumables) and indirect (capital or investment) costs. Primary source facility level data from Kenya, Namibia, South Africa, Uganda, Zambia and Zimbabwe are utilized, with separate analyses for hospitals and health centres. Second, the resulting other-direct and indirect factor shares are applied to country unit costs from the WHO CHOICE unit cost database to calculate those portions of unit cost. Third, the remainder of the costs is calculated using default data from the OHT. Fourth, we calculate the effect of bundling services by assuming that a LiST intervention visit takes an average of 20 minutes when delivered alone but only incremental time in addition to the basic visit when delivered in a bundle. Personnel costs account for the greatest share of costs for both hospitals and health centres at 50% and 38%, respectively. The percentages differ between hospitals and health centres for consumables (21% versus 17%), other direct (7.5% versus 6.75%), and indirect (22% versus 23%) costs. Combining the other-direct and indirect factor shares with the WHO CHOICE database and the other costs from OHT provides a comprehensive cost estimate of LiST interventions. Finally, the cost of six recommended antenatal care (ANC) interventions is $69.76 when delivered alone, but $61.18 when delivered as a bundle, a savings of $8.58 (12.2%). This paper proposes a method for estimating a comprehensive cost of providing child and maternal health interventions by combining labor
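    A minimal sketch of the bundling arithmetic described above (a 20-minute stand-alone visit versus only incremental time per intervention when bundled) follows; the per-minute personnel cost and incremental minutes are hypothetical and are not the paper's estimates.

    ```python
    # Hypothetical personnel cost and incremental times, used only to illustrate the
    # bundling logic; these are not the paper's values.
    PERSONNEL_COST_PER_MIN = 0.50  # facility personnel cost, $/minute (hypothetical)
    BASE_VISIT_MIN = 20.0          # a stand-alone LiST intervention visit, minutes

    def personnel_cost_alone(n_interventions):
        """Each intervention delivered as its own 20-minute visit."""
        return n_interventions * BASE_VISIT_MIN * PERSONNEL_COST_PER_MIN

    def personnel_cost_bundled(incremental_minutes):
        """One base visit plus only the incremental time of each added intervention."""
        return (BASE_VISIT_MIN + sum(incremental_minutes)) * PERSONNEL_COST_PER_MIN

    alone = personnel_cost_alone(6)
    bundled = personnel_cost_bundled([5, 5, 4, 4, 3])  # five interventions added to a base visit
    print(alone, bundled, alone - bundled)  # bundling saves the overlapping visit time
    ```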

  4. SU-F-T-687: Comparison of SPECT/CT-Based Methodologies for Estimating Lung Dose from Y-90 Radioembolization

    Energy Technology Data Exchange (ETDEWEB)

    Kost, S; Yu, N [Cleveland Clinic, Cleveland, OH (United States); Lin, S [Cleveland State University, Cleveland, OH (United States)

    2016-06-15

    Purpose: To compare mean lung dose (MLD) estimates from 99mTc macroaggregated albumin (MAA) SPECT/CT using two published methodologies for patients treated with {sup 90}Y radioembolization for liver cancer. Methods: MLD was estimated retrospectively using two methodologies for 40 patients from SPECT/CT images of 99mTc-MAA administered prior to radioembolization. In these two methods, lung shunt fractions (LSFs) were calculated as the ratio of scanned lung activity to the activity in the entire scan volume or to the sum of activity in the lung and liver, respectively. Misregistration of liver activity into the lungs during SPECT acquisition was overcome by excluding lung counts within either 2 or 1.5 cm of the diaphragm apex, respectively. Patient lung density was assumed to be 0.3 g/cm{sup 3} or derived from CT densitovolumetry, respectively. Results from both approaches were compared to MLD determined by planar scintigraphy (PS). The effect of patient size on the difference between MLD from PS and SPECT/CT was also investigated. Results: Lung density from CT densitovolumetry is not different from the reference density (p = 0.68). The second method resulted in lung doses that were on average 1.5 times larger than those from the first method; however, the difference between the means of the two estimates was not significant (p = 0.07). Lung doses from both methods were statistically different from those estimated from 2D PS (p < 0.001). There was no correlation between patient size and the difference between MLD from PS and both SPECT/CT methods (r < 0.22, p > 0.17). Conclusion: There is no statistically significant difference between the MLDs estimated by the two techniques. Both methods are statistically different from conventional PS, with PS overestimating dose by a factor of three or larger. The difference between lung doses estimated from 2D planar or 3D SPECT/CT is not dependent on patient size.
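    For reference, a commonly used partition-model expression for the mean lung dose from {sup 90}Y (49.67 Gy·kg/GBq, assuming complete local decay) is sketched below; it is not necessarily the exact dose engine behind either SPECT/CT methodology, and the example numbers are hypothetical.

    ```python
    def y90_mean_lung_dose_gy(activity_gbq, lung_shunt_fraction, lung_mass_kg):
        """Mean lung dose (Gy) from the partition-model relation
        D = 49.67 * A[GBq] * LSF / M[kg], assuming complete local decay of 90Y.
        Lung mass may be assumed (e.g., 1 kg) or taken from CT densitovolumetry."""
        return 49.67 * activity_gbq * lung_shunt_fraction / lung_mass_kg

    # Hypothetical case: 2 GBq planned activity, 8% lung shunt, 1 kg lung mass
    print(round(y90_mean_lung_dose_gy(2.0, 0.08, 1.0), 1))  # ~7.9 Gy
    ```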

  5. Methodology for estimation of 32P in bioassay samples by Cerenkov counting

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Sawant, Pramilla D.; Yadav, R.K.B.; Rao, D.D.

    2016-01-01

    Radioactive phosphorus (32P) as phosphate is used to effectively reduce bone pain in terminal cancer patients. Several hospitals in India carry out this palliative care procedure on a regular basis. Thus, production as well as synthesis of 32P compounds has increased over the years to meet this requirement. Monitoring of radiation workers handling 32P compounds is important for further strengthening of radiological protection program at processing facility. 32P being a pure beta emitter (βmax = 1.71 MeV, t1/2 = 14.3 d), bioassay is the preferred individual monitoring technique. Method standardized at Bioassay Lab, Trombay, includes estimation of 32P in urine by co-precipitation with ammonium phosphomolybdate (AMP) followed by gross beta counting. In the present study, feasibility of Cerenkov counting for detection of 32P in bioassay samples was explored and the results obtained were compared with the gross beta counting technique

  6. Phenotypic variance, plasticity and heritability estimates of critical thermal limits depend on methodological context

    DEFF Research Database (Denmark)

    Chown, Steven L.; Jumbam, Keafon R.; Sørensen, Jesper Givskov

    2009-01-01

    used during assessments of critical thermal limits to activity. To date, the focus of work has almost exclusively been on the effects of rate variation on mean values of the critical limits. 2.  If the rate of temperature change used in an experimental trial affects not only the trait mean but also its...... this is the case for critical thermal limits using a population of the model species Drosophila melanogaster and the invasive ant species Linepithema humile. 4.  We found that effects of the different rates of temperature change are variable among traits and species. However, in general, different rates...... of temperature change resulted in different phenotypic variances and different estimates of heritability, presuming that genetic variance remains constant. We also found that different rates resulted in different conclusions regarding the responses of the species to acclimation, especially in the case of L...

  7. Data Processing Procedures and Methodology for Estimating Trip Distances for the 1995 American Travel Survey (ATS)

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, H.-L.; Rollow, J.

    2000-05-01

    The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.

  8. New NRC methodology for estimating biological risks from exposure to ionizing radiation

    International Nuclear Information System (INIS)

    Willis, C.A; Branagan, E.F.

    1983-01-01

    In licensing commercial nuclear power reactors, the US Nuclear Regulatory Commission (NRC) considers the potential health effects from the release of radioactive effluents. This entails reliance on epidemiological study results and interpretations. The BEIR III report is a principal source of information, but as newer information becomes available, it is desirable to include it in NRC models. To facilitate both the estimation of potential health effects and the evaluation of epidemiological study results, the NRC has supported the development of a new computer code (SPAHR). This new code utilizes much more comprehensive demographic models than did the previously used codes (CAIRD and BIERMOD). SPAHR can accommodate variations in all the principal demographic statistics, such as age distribution, age-specific competing risks, and sex ratio. SPAHR can also project effects over a number of generations

  9. Selection of meteorological parameters affecting rainfall estimation using neuro-fuzzy computing methodology

    Science.gov (United States)

    Hashim, Roslan; Roy, Chandrabhushan; Motamedi, Shervin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Lee, Siew Cheng

    2016-05-01

    Rainfall is a complex atmospheric process that varies over time and space. Researchers have used various empirical and numerical methods to enhance estimation of rainfall intensity. We developed a novel prediction model in this study, with an emphasis on accuracy, to identify the most significant meteorological parameters affecting rainfall. For this, we used five input parameters: wet day frequency (dwet), vapor pressure (e̅a), maximum and minimum air temperatures (Tmax and Tmin), and cloud cover (cc). The data were obtained from the Indian Meteorological Department for the city of Patna, Bihar, India. Further, a type of soft-computing method, known as the adaptive neuro-fuzzy inference system (ANFIS), was applied to the available data. In this respect, the observation data from 1901 to 2000 were employed for testing, validating, and estimating monthly rainfall via the simulated model. In addition, the ANFIS process for variable selection was implemented to detect the predominant variables affecting rainfall prediction. Finally, the performance of the model was compared to other soft-computing approaches, including the artificial neural network (ANN), support vector machine (SVM), extreme learning machine (ELM), and genetic programming (GP). The results revealed that ANN, ELM, ANFIS, SVM, and GP had R2 values of 0.9531, 0.9572, 0.9764, 0.9525, and 0.9526, respectively. Therefore, we conclude that ANFIS is the best of these methods for predicting monthly rainfall. Moreover, dwet was found to be the most influential parameter for rainfall prediction and the best single predictor. This study also identified the sets of two and three meteorological parameters that show the best predictions.

  10. A Methodology to Estimate Ores Work Index Values, Using Miduk Copper Mine Sample

    Directory of Open Access Journals (Sweden)

    Mohammad Noaparast

    2012-12-01

    It is always attempted to reduce the costs of comminution in mineral processing plants. One of the difficulties is that the size reduction section is often not designed properly. The key factor in designing size reduction units such as crushers and grinding mills is the ore's work index. The work index, wi, represents the ore grindability and is used in the Bond formula to calculate the required energy. Bond defined a specific relationship between several parameters which is applied to calculate wi; these are the control screen, the fine particles produced, and the feed and product d80. In this research work, a high grade copper sample from the Miduk copper concentrator was prepared, and its work index values were experimentally estimated using different control screens: 600, 425, 212, 150, 106 and 75 microns. The results obtained from the tests showed two different behaviors in fine production. According to these two trends, the required models were then defined to present the fine mass calculation using the control screen. In the next step, an equation was presented in order to calculate the Miduk copper ore work index for any size. In addition, to verify the model's credibility, a test using a 300 micron control screen was performed and its result was compared with the value calculated using the defined model, which showed a good fit. Finally, the experimental and calculated values were compared and their relative error was equal to 4.11%, which is an indication of a good fit for the results.
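    For orientation, the standard Bond grindability expressions referred to above are sketched below; the constants follow Bond's classical formulation (which yields kWh per short ton), and the test values shown are hypothetical rather than the Miduk results.

    ```python
    import math

    def bond_work_index(p1_um, gbp_g_per_rev, f80_um, p80_um):
        """Bond ball-mill work index from the standard grindability test:
        Wi = 44.5 / (P1**0.23 * Gbp**0.82 * (10/sqrt(P80) - 10/sqrt(F80))),
        with sizes in microns; the classical constant gives kWh per short ton."""
        return 44.5 / (
            p1_um ** 0.23
            * gbp_g_per_rev ** 0.82
            * (10.0 / math.sqrt(p80_um) - 10.0 / math.sqrt(f80_um))
        )

    def bond_specific_energy(wi, f80_um, p80_um):
        """Bond's 'third theory' energy: W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80))."""
        return 10.0 * wi * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

    # Hypothetical test: 212 micron closing screen, 1.5 g/rev, F80 = 2000, P80 = 150 microns
    wi = bond_work_index(212.0, 1.5, 2000.0, 150.0)
    print(round(wi, 1), round(bond_specific_energy(wi, 2000.0, 150.0), 1))
    ```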

  11. Estimates of future discharges of the river Rhine using two scenario methodologies: direct versus delta approach

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Simulations with a hydrological model for the river Rhine for the present (1960–1989) and a projected future (2070–2099) climate are discussed. The hydrological model (RhineFlow) is driven by meteorological data from a 90-year (ensemble of three 30-year) simulation with the HadRM3H regional climate model for both present-day and future climate (A2 emission scenario). Simulation of present-day discharges is realistic provided that (1) the HadRM3H temperature and precipitation are corrected for biases, and (2) the potential evapotranspiration is derived from temperature only. Different methods are used to simulate discharges for the future climate: one is based on the direct model output of the future climate run (direct approach), while the other is based on perturbation of the present-day HadRM3H time series (delta approach). Both methods predict a similar response in the mean annual discharge: an increase of 30% in winter and a decrease of 40% in summer. However, predictions of extreme flows differ significantly, with increases of 10% in flows with a return period of 100 years in the direct approach and approximately 30% in the delta approach. A bootstrap method is used to estimate the uncertainties related to the sample size (number of years simulated) in predicting changes in extreme flows.
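    A minimal sketch of the two scenario methodologies compared above is given below: the delta approach perturbs the observed present-day series by the model-simulated change, while the direct approach uses the (bias-corrected) future model output itself; the numbers are hypothetical.

    ```python
    def delta_approach(obs_present, model_present_mean, model_future_mean):
        """Delta (perturbation) approach: scale the observed present-day series by the
        relative change simulated by the climate model."""
        factor = model_future_mean / model_present_mean
        return [p * factor for p in obs_present]

    def direct_approach(model_future_series, bias_correction_factor):
        """Direct approach: use the bias-corrected future model output itself."""
        return [p * bias_correction_factor for p in model_future_series]

    # Hypothetical monthly precipitation (mm) and climatological means
    obs = [85.0, 60.0, 40.0]
    print(delta_approach(obs, model_present_mean=70.0, model_future_mean=91.0))
    print(direct_approach([95.0, 70.0, 48.0], bias_correction_factor=0.9))
    ```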

  12. Estimates of fluid pressure and tectonic stress in hydrothermal/volcanic areas:a methodological approach

    Directory of Open Access Journals (Sweden)

    G. Vilardo

    2005-06-01

    An analytical approach to estimate the relative contribution of the fluid pressure and tectonic stress in hydrothermal/volcanic areas is proposed, assuming a Coulomb criterion of failure. The analytical procedure requires the coefficient of internal friction, cohesion, rock density, and thickness of overburden to be known from geological data. In addition, the orientation of the principal stress axes and the stress ratio must be determined from the inversion of fault-slip or seismic data (focal mechanisms). At first, the stress magnitude is calculated assuming that faulting occurs in 'dry' conditions (fluid pressure = 0). In a second step, the fluid pressure is introduced by performing a grid search over the orientation of (1) fault planes that slip by shear failure or (2) cracks that open under different values of fluid pressure, and calculating the consistency with the observed fault planes (i.e. strike and dip of faults, cracks, and nodal planes from focal mechanisms). The analytical method is applied using fault-slip data from the Solfatara volcano (Campi Flegrei, Italy) and seismic data (focal mechanisms) from the Vesuvius volcano (Italy). In these areas, the fluid pressure required to activate faults (shear fractures) and cracks (open fractures) is calculated. At Solfatara, the ratio λ between the fluid pressure and the vertical stress is very low for faults (λ = 0.16) and relatively high for cracks (λ = 0.5). At Vesuvius, λ = 0.6. Limits and uncertainties of the method are also discussed.
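    A minimal sketch of the underlying Coulomb-criterion step, solving for the fluid pressure needed to bring a plane to failure, is shown below; the stresses, cohesion, and friction coefficient are hypothetical, and the sketch omits the grid search over orientations described above.

    ```python
    def fluid_pressure_for_slip(tau_mpa, sigma_n_mpa, cohesion_mpa, mu):
        """Pore-fluid pressure (MPa) required for shear failure on a plane, from the
        Coulomb criterion tau = C + mu * (sigma_n - P_f), i.e. P_f = sigma_n - (tau - C)/mu."""
        return sigma_n_mpa - (tau_mpa - cohesion_mpa) / mu

    # Hypothetical stresses resolved on a fault plane (MPa)
    p_f = fluid_pressure_for_slip(tau_mpa=12.0, sigma_n_mpa=40.0, cohesion_mpa=5.0, mu=0.6)
    sigma_v = 45.0  # hypothetical vertical (overburden) stress, MPa
    print(round(p_f, 1), round(p_f / sigma_v, 2))  # required P_f and its ratio to sigma_v
    ```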

  13. Methodology and estimation of the welfare impact of energy reforms on households in Azerbaijan

    Science.gov (United States)

    Klytchnikova, Irina

    This dissertation develops a new approach that enables policy-makers to analyze welfare gains from improvements in the quality of infrastructure services in developing countries where data are limited and supply is subject to interruptions. An application of the proposed model in the former Soviet Republic of Azerbaijan demonstrates how this approach can be used in welfare assessment of energy sector reforms. The planned reforms in Azerbaijan include a set of measures that will result in a significant improvement in supply reliability, accompanied by a significant increase in the prices of energy services so that they reach the cost recovery level. Currently, households in rural areas receive electricity and gas for only a few hours a day because of a severe deterioration of the energy infrastructure following the collapse of the Soviet Union. The reforms that have recently been initiated will have far-reaching poverty and distributional consequences for the country as they result in an improvement in supply reliability and an increase in energy prices. The new model of intermittent supply developed in this dissertation is based on the household production function approach and draws on previous research in the energy reliability literature. Since modern energy sources (network gas and electricity) in Azerbaijan are cleaner and cheaper than the traditional fuels (fuel wood, etc.), households choose modern fuels whenever they are available. During outages, they rely on traditional fuels. Theoretical welfare measures are derived from a system of fuel demands that takes into account the intermittent availability of energy sources. The model is estimated with the data from the Azerbaijan Household Energy Survey, implemented by the World Bank in December 2003/January 2004. This survey includes an innovative contingent behavior module in which the respondents were asked about their energy consumption patterns in specified reform scenarios. Estimation results strongly

  14. On the plurality of (methodological) worlds: Estimating the analytic flexibility of fMRI experiments.

    Directory of Open Access Journals (Sweden)

    Joshua eCarp

    2012-10-01

    How likely are published findings in the functional neuroimaging literature to be false? According to a recent mathematical model, the potential for false positives increases with the flexibility of analysis methods. Functional MRI (fMRI) experiments can be analyzed using a large number of commonly used tools, with little consensus on how, when, or whether to apply each one. This situation may lead to substantial variability in analysis outcomes. Thus, the present study sought to estimate the flexibility of neuroimaging analysis by submitting a single event-related fMRI experiment to a large number of unique analysis procedures. Ten analysis steps for which multiple strategies appear in the literature were identified, and two to four strategies were enumerated for each step. Considering all possible combinations of these strategies yielded 6,912 unique analysis pipelines. Activation maps from each pipeline were corrected for multiple comparisons using five thresholding approaches, yielding 34,560 significance maps. While some outcomes were relatively consistent across pipelines, others showed substantial methods-related variability in activation strength, location, and extent. Some analysis decisions contributed to this variability more than others, and different decisions were associated with distinct patterns of variability across the brain. Qualitative outcomes also varied with analysis parameters: many contrasts yielded significant activation under some pipelines but not others. Altogether, these results reveal considerable flexibility in the analysis of fMRI experiments. This observation, when combined with mathematical simulations linking analytic flexibility with elevated false positive rates, suggests that false positive results may be more prevalent than expected in the literature. This risk of inflated false positive rates may be mitigated by constraining the flexibility of analytic choices or by abstaining from selective analysis
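    The combinatorics can be made concrete with the tiny sketch below; the per-step strategy counts shown are just one allocation of two to four strategies across ten steps that reproduces the reported totals (6,912 pipelines and 34,560 maps), not the paper's actual breakdown.

    ```python
    import math

    # One allocation of 2-4 strategies across 10 analysis steps that reproduces the
    # reported totals; the paper's actual per-step counts are not assumed here.
    strategies_per_step = [2, 2, 2, 2, 2, 2, 3, 3, 3, 4]
    pipelines = math.prod(strategies_per_step)
    significance_maps = pipelines * 5    # five multiple-comparison thresholding approaches
    print(pipelines, significance_maps)  # 6912 34560
    ```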

  15. A Novel Methodology to Estimate Metabolic Flux Distributions in Constraint-Based Models

    Directory of Open Access Journals (Sweden)

    Francesco Alessandro Massucci

    2013-09-01

    Quite generally, constraint-based metabolic flux analysis describes the space of viable flux configurations for a metabolic network as a high-dimensional polytope defined by the linear constraints that enforce the balancing of production and consumption fluxes for each chemical species in the system. In some cases, the complexity of the solution space can be reduced by performing an additional optimization, while in other cases, knowing the range of variability of fluxes over the polytope provides a sufficient characterization of the allowed configurations. There are cases, however, in which the thorough information encoded in the individual distributions of viable fluxes over the polytope is required. Obtaining such distributions is known to be a highly challenging computational task when the dimensionality of the polytope is sufficiently large, and the problem of developing cost-effective ad hoc algorithms has recently seen a major surge of interest. Here, we propose a method that allows us to perform the required computation heuristically in a time scaling linearly with the number of reactions in the network, overcoming some limitations of similar techniques employed in recent years. As a case study, we apply it to the analysis of the human red blood cell metabolic network, whose solution space can be sampled by different exact techniques, like Hit-and-Run Monte Carlo (scaling roughly like the third power of the system size). Remarkably accurate estimates for the true distributions of viable reaction fluxes are obtained, suggesting that, although further improvements are desirable, our method enhances our ability to analyze the space of allowed configurations for large biochemical reaction networks.
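    For context, a bare-bones hit-and-run sampler for a generic bounded polytope {x : Ax <= b} is sketched below; it illustrates the exact technique mentioned as a benchmark, not the authors' faster heuristic, and it omits the equality (mass-balance) constraints of a real metabolic network.

    ```python
    import numpy as np

    def hit_and_run(A, b, x0, n_samples, seed=None):
        """Sample from the bounded polytope {x : A @ x <= b} by hit-and-run.
        x0 must be strictly interior. Returns an (n_samples, dim) array."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        out = np.empty((n_samples, x.size))
        for i in range(n_samples):
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)
            ad = A @ d
            slack = b - A @ x                      # >= 0 for an interior point
            pos, neg = ad > 1e-12, ad < -1e-12     # rows limiting each direction
            t_max = np.min(slack[pos] / ad[pos])
            t_min = np.max(slack[neg] / ad[neg])
            x = x + rng.uniform(t_min, t_max) * d  # uniform step on the feasible chord
            out[i] = x
        return out

    # Toy example: the unit square 0 <= x, y <= 1
    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([1.0, 0.0, 1.0, 0.0])
    samples = hit_and_run(A, b, x0=[0.5, 0.5], n_samples=2000, seed=0)
    print(samples.mean(axis=0))  # close to [0.5, 0.5]
    ```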

  16. Assessment of microalgae biodiesel fuels using a fuel property estimation methodology

    Energy Technology Data Exchange (ETDEWEB)

    Torrens, Jonas Colen Ladeia; Vargas, Jose Viriato Coelho; Mariano, Andre Bellin [Center for Research and Development of Sustainable Energy. Universidade Federal do Parana, Curitiba, PR (Brazil)

    2010-07-01

    Recently, depleting supplies of petroleum and concerns about global warming have drawn attention to alternative sources of energy. In this context, advanced biofuels, derived from non-edible higher plants and microorganisms, are presented as promising options for the transportation sector. Biodiesel, which is the most prominent alternative fuel for compression ignition engines, has a large number of potential feedstocks, such as plants (e.g., soybean, canola, palm) and microorganisms (i.e., microalgae, yeast, fungi and bacteria). In order to determine their potential, most studies focus on economic viability, but few discuss the technical viability of producing high quality fuels from such feedstocks. Since the fuel properties depend on the composition of the parent oil, and considering the variability of the fatty acid profiles found in these organisms, it is clear that the derived fuels may present undesirable properties, e.g., high viscosity, low cetane number, low oxidative stability and poor cold flow properties. Therefore, it is very important to develop ways of analysing fuel quality prior to production, especially considering the high cost of producing and testing several varieties of plants and microorganisms. To this aim, this work presents the use of fuel property estimation methods for the assessment of the density, viscosity, cetane number and cold filter plugging point of several microalgae-derived biofuels, comparing them to more conventional biodiesel fuels. The information gathered with these methods helps in the selection of species and cultivation parameters, which have a high impact on the derived fuel quality, and these methods have been successfully employed at the Center for Research and Development of Sustainable Energy. The results demonstrate that some species of microalgae have the potential to produce high quality biodiesel if cultivated under optimised conditions, associated with the possibility of obtaining valuable long chain

  17. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks

    International Nuclear Information System (INIS)

    Elliott Campbell, J.; Moen, Jeremie C.; Ney, Richard A.; Schnoor, Jerald L.

    2008-01-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches respectively. - Large differences in estimates of soil organic carbon stocks and annual changes in stocks for Wisconsin forestlands indicate a need for validation from forthcoming forest surveys

  18. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    Science.gov (United States)

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises as to how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Preliminary methodological proposal for estimating environmental flows in projects approved by the ministry of environment and sustainable development (MADS), Colombia

    International Nuclear Information System (INIS)

    Pinilla Agudelo, Gabriel A; Rodriguez Sandoval, Erasmo A; Camacho Botero, Luis A

    2014-01-01

    A methodological proposal for estimating environmental flows in large projects approved by the Agencia Nacional de Licencias Ambientales (ANLA) in Colombian rivers was developed. The project is the result of an agreement between the MADS and the Universidad Nacional de Colombia, Bogota (UNC). The proposed method begins with an evaluation of hydrological criteria, continues with a hydraulic and water quality validation, and follows with the determination of habitat integrity. This is an iterative process that compares conditions before and after the project construction and makes it possible to obtain the magnitude of a monthly flow that, besides preserving the ecological functions of the river, guarantees the water uses downstream. Regarding the biotic component, the proposal includes the establishment and monitoring of biotic integrity indices for four aquatic communities (periphyton, macroinvertebrates, riparian vegetation, and fish). The effects that flow reduction may produce in the medium and long term can be assessed by these indices. We present the results of applying the methodology to several projects licensed by the MADS.

  20. Find-rate methodology and resource base estimates of the Hydrocarbon Supply Model (1990 update). Topical report

    International Nuclear Information System (INIS)

    Woods, T.

    1991-02-01

    The Hydrocarbon Supply Model is used to develop long-term trends in Lower-48 gas production and costs. The model utilizes historical find-rate patterns to predict the discovery rate and size distribution of future oil and gas field discoveries. The report documents the methodologies used to quantify historical oil and gas field find-rates and to project those discovery patterns for future drilling. It also explains the theoretical foundations for the find-rate approach. The new field and reserve growth resource base is documented and compared to other published estimates. The report has six sections. Section 1 provides background information and an overview of the model. Sections 2, 3, and 4 describe the theoretical foundations of the model, the databases, and specific techniques used. Section 5 presents the new field resource base by region and depth. Section 6 documents the reserve growth model components

  1. Methodology and data used for estimating the complex-wide impacts of alternative environmental restoration clean-up goals

    International Nuclear Information System (INIS)

    Shay, M.R.; Short, S.M.; Stiles, D.L.

    1994-03-01

    This paper describes the methodologies and data used for estimating the complex-wide impacts of alternative strategies for conducting remediation of all DOE sites and facilities, but does not address issues relating to Waste Management capabilities. Clean-up strategies and their corresponding goals for contaminated media may be driven by concentration-based regulatory standards, land-use standards (e.g., residential, industrial, wildlife reserve, or totally restricted), risk-based standards, or other standards determined through stakeholder input. Strategies implemented to achieve these goals usually require the deployment of (a) clean-up technologies to destroy, remove, or contain the contaminants of concern; (b) institutional controls to prevent potential receptors from coming into contact with the contaminants; or (c) a combination of the above

  2. Simple methodologies to estimate the energy amount stored in a tree due to an explosive seed dispersal mechanism

    Science.gov (United States)

    do Carmo, Eduardo; Goncalves Hönnicke, Marcelo

    2018-05-01

    There are different ways to introduce and illustrate energy concepts to introductory physics students. The explosive seed dispersal mechanism found in a variety of trees could be one of them. Sibipiruna trees bear fruits (pods) that exhibit such an explosive mechanism. During the explosion, the pods throw the seeds several meters away. In this manuscript we show simple methodologies to estimate the amount of energy stored in a Sibipiruna tree due to this process. Two different physical approaches were used to carry out this study: monitoring the explosive seed dispersal mechanism indoors and in situ, and measuring the elastic constant of the pod shell. An energy of the order of kJ was found to be stored in a single tree due to this explosive mechanism.
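    A minimal sketch of the two estimates described above, launch energy inferred from the measured throw distance (drag-free projectile motion) and elastic energy stored in the pod shell, follows; the seed mass, range, spring constant, and deflection are hypothetical.

    ```python
    import math

    G = 9.81  # m/s^2

    def kinetic_energy_from_range(seed_mass_kg, horizontal_range_m, launch_angle_deg=45.0):
        """Launch kinetic energy (J) of one seed inferred from its measured horizontal
        range, assuming drag-free projectile motion: R = v**2 * sin(2*theta) / g."""
        v_squared = G * horizontal_range_m / math.sin(math.radians(2.0 * launch_angle_deg))
        return 0.5 * seed_mass_kg * v_squared

    def elastic_energy(spring_constant_n_per_m, deflection_m):
        """Energy (J) stored in the pod shell modeled as a linear spring: E = k*x**2/2."""
        return 0.5 * spring_constant_n_per_m * deflection_m ** 2

    # Hypothetical values: a 0.5 g seed thrown ~5 m; pod shell k ~ 300 N/m flexed by 2 cm.
    print(round(kinetic_energy_from_range(0.0005, 5.0), 4), "J per seed")
    print(round(elastic_energy(300.0, 0.02), 3), "J per pod shell")
    # Multiplying a per-seed or per-pod estimate by the number of pods on a tree gives
    # the tree-level energy estimate discussed above.
    ```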

  3. Developing a Cost Model and Methodology to Estimate Capital Costs for Thermal Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Glatzmaier, G.

    2011-12-01

    This report provides an update on the previous cost model for thermal energy storage (TES) systems. The update allows NREL to estimate the costs of such systems that are compatible with the higher operating temperatures associated with advanced power cycles. The goal of the Department of Energy (DOE) Solar Energy Technology Program is to develop solar technologies that can make a significant contribution to the United States domestic energy supply. The recent DOE SunShot Initiative sets a very aggressive cost goal to reach a Levelized Cost of Energy (LCOE) of 6 cents/kWh by 2020 with no incentives or credits for all solar-to-electricity technologies.1 As this goal is reached, the share of utility power generation that is provided by renewable energy sources is expected to increase dramatically. Because Concentrating Solar Power (CSP) is currently the only renewable technology that is capable of integrating cost-effective energy storage, it is positioned to play a key role in providing renewable, dispatchable power to utilities as the share of power generation from renewable sources increases. Because of this role, future CSP plants will likely have as much as 15 hours of Thermal Energy Storage (TES) included in their design and operation. As such, the cost and performance of the TES system is critical to meeting the SunShot goal for solar technologies. The cost of electricity from a CSP plant depends strongly on its overall efficiency, which is a product of two components - the collection and conversion efficiencies. The collection efficiency determines the portion of incident solar energy that is captured as high-temperature thermal energy. The conversion efficiency determines the portion of thermal energy that is converted to electricity. The operating temperature at which the overall efficiency reaches its maximum depends on many factors, including material properties of the CSP plant components. Increasing the operating temperature of the power generation

  4. Combining tracer flux ratio methodology with low-flying aircraft measurements to estimate dairy farm CH4 emissions

    Science.gov (United States)

    Daube, C.; Conley, S.; Faloona, I. C.; Yacovitch, T. I.; Roscioli, J. R.; Morris, M.; Curry, J.; Arndt, C.; Herndon, S. C.

    2017-12-01

    Livestock activity, enteric fermentation of feed and anaerobic digestion of waste, contributes significantly to the methane budget of the United States (EPA, 2016). Studies question the reported magnitude of these methane sources (Miller et al., 2013), calling for more detailed research of agricultural animals (Hristov, 2014). Tracer flux ratio is an attractive experimental method to bring to this problem because it does not rely on estimates of atmospheric dispersion. Data collection occurred during one week at two dairy farms in central California (June 2016). Each farm varied in size, layout, head count, and general operation. The tracer flux ratio method involves releasing ethane on-site with a known flow rate to serve as a tracer gas. Downwind mixed enhancements in ethane (from the tracer) and methane (from the dairy) were measured, and their ratio used to infer the unknown methane emission rate from the farm. An instrumented van drove transects downwind of each farm on public roads while tracer gases were released on-site, employing the tracer flux ratio methodology to assess simultaneous methane and tracer gas plumes. Flying circles around each farm, a small instrumented aircraft made measurements to perform a mass balance evaluation of methane gas. In the course of these two different methane quantification techniques, we were able to validate yet a third method: tracer flux ratio measured via aircraft. Ground-based tracer release rates were applied to the aircraft-observed methane-to-ethane ratios, yielding whole-site methane emission rates. Never before has the tracer flux ratio method been executed with aircraft measurements. Estimates from this new application closely resemble results from the standard ground-based technique to within their respective uncertainties. Incorporating this new dimension to the tracer flux ratio methodology provides additional context for local plume dynamics and validation of both ground and flight-based data.
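    A minimal sketch of the tracer flux ratio calculation described above follows: the known ethane release rate is scaled by the downwind molar enhancement ratio and converted to a mass basis; the release rate and enhancements shown are hypothetical.

    ```python
    M_CH4 = 16.04   # g/mol
    M_C2H6 = 30.07  # g/mol

    def methane_emission_rate_kg_h(tracer_release_kg_h, d_ch4_ppb, d_c2h6_ppb):
        """Whole-site CH4 emission (kg/h) from the tracer flux ratio method: the
        downwind molar enhancement ratio times the known tracer release rate,
        converted from a molar to a mass basis."""
        molar_ratio = d_ch4_ppb / d_c2h6_ppb
        return tracer_release_kg_h * molar_ratio * (M_CH4 / M_C2H6)

    # Hypothetical transect: 2 kg/h ethane released, 60 ppb CH4 and 8 ppb C2H6 enhancements
    print(round(methane_emission_rate_kg_h(2.0, 60.0, 8.0), 1))  # ~8.0 kg CH4/h
    ```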

  5. A new time-series methodology for estimating relationships between elderly frailty, remaining life expectancy, and ambient air quality.

    Science.gov (United States)

    Murray, Christian J; Lipfert, Frederick W

    2012-01-01

    Many publications estimate short-term air pollution-mortality risks, but few estimate the associated changes in life expectancies. We present a new methodology for analyzing time series of health effects, in which prior frailty is assumed to precede short-term elderly nontraumatic mortality. The model is based on a subpopulation of frail individuals whose entries and exits (deaths) are functions of daily and lagged environmental conditions: ambient temperature/season, airborne particles, and ozone. This frail susceptible population is unknown; its fluctuations cannot be observed but are estimated using maximum-likelihood methods with the Kalman filter. We used an existing 14-y set of daily data to illustrate the model and then tested the assumption of prior frailty with a new generalized model that estimates the portion of the daily death count allocated to nonfrail individuals. In this demonstration dataset, new entries into the high-risk pool are associated with lower ambient temperatures and higher concentrations of particulate matter and ozone. Accounting for these effects on antecedent frailty reduces this at-risk population, yielding frail life expectancies of 5-7 days. Associations between environmental factors and entries to the at-risk pool are about twice as strong as for mortality. Nonfrail elderly deaths are seen to make only small contributions. This new model predicts a small short-lived frail population-at-risk that is stable over a wide range of environmental conditions. The predicted effects of pollution on new entries and deaths are robust and consistent with conventional morbidity/mortality time-series studies. We recommend model verification using other suitable datasets.

  6. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    Science.gov (United States)

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) there is the statistical (Poisson) one, in which the activation of motor units is carried out by electrical stimulation and the estimation is performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to provide an accessible account of the Poisson MUNE technique, showing a comprehensible view of its methodology, and also to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for the whole sample was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group, and the MUNE correlated with age more strongly than the CMAP amplitude did (0.5002 and 0.4142, respectively). The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.

  7. Evaluation of the conservativeness of the methodology for estimating earthquake-induced movements of fractures intersecting canisters

    International Nuclear Information System (INIS)

    La Pointe, Paul R.; Cladouhos, Trenton T.; Outters, Nils; Follin, Sven

    2000-04-01

    This study evaluates the parameter sensitivity and the conservativeness of the methodology outlined in TR 99-03. Sensitivity analysis focuses on understanding how variability in input parameter values impacts the calculated fracture displacements. These studies clarify what parameters play the greatest role in fracture movements, and help define critical values of these parameters in terms of canister failures. The thresholds or intervals of values that lead to a certain level of canister failure calculated in this study could be useful for evaluating future candidate sites. Key parameters include: 1. magnitude/frequency of earthquakes; 2. the distance of the earthquake from the canisters; 3. the size and aspect ratio of fractures intersecting canisters; and 4. the orientation of the fractures. The results of this study show that distance and earthquake magnitude are the most important factors, followed by fracture size. Fracture orientation is much less important. Regression relations were developed to predict induced fracture slip as a function of distance and either earthquake magnitude or slip on the earthquake fault. These regression relations were validated by using them to estimate the number of canister failures due to single damaging earthquakes at Aberg, and comparing these estimates with those presented in TR 99-03. The methodology described in TR 99-03 employs several conservative simplifications in order to devise a numerically feasible method to estimate fracture movements due to earthquakes outside of the repository over the next 100,000 years. These simplifications include: 1. fractures are assumed to be frictionless and cohesionless; 2. all energy transmitted to the fracture by the earthquake is assumed to produce elastic deformation of the fracture; no energy is diverted into fracture propagation; and 3. shielding effects of other fractures between the earthquake and the fracture are neglected. The numerical modeling effectively assumes that the

  8. Evaluation of the conservativeness of the methodology for estimating earthquake-induced movements of fractures intersecting canisters

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R.; Cladouhos, Trenton T. [Golder Associates Inc., Las Vegas, NV (United States); Outters, Nils; Follin, Sven [Golder Grundteknik KB, Stockholm (Sweden)

    2000-04-01

    This study evaluates the parameter sensitivity and the conservativeness of the methodology outlined in TR 99-03. Sensitivity analysis focuses on understanding how variability in input parameter values impacts the calculated fracture displacements. These studies clarify what parameters play the greatest role in fracture movements, and help define critical values of these parameters in terms of canister failures. The thresholds or intervals of values that lead to a certain level of canister failure calculated in this study could be useful for evaluating future candidate sites. Key parameters include: 1. magnitude/frequency of earthquakes; 2. the distance of the earthquake from the canisters; 3. the size and aspect ratio of fractures intersecting canisters; and 4. the orientation of the fractures. The results of this study show that distance and earthquake magnitude are the most important factors, followed by fracture size. Fracture orientation is much less important. Regression relations were developed to predict induced fracture slip as a function of distance and either earthquake magnitude or slip on the earthquake fault. These regression relations were validated by using them to estimate the number of canister failures due to single damaging earthquakes at Aberg, and comparing these estimates with those presented in TR 99-03. The methodology described in TR 99-03 employs several conservative simplifications in order to devise a numerically feasible method to estimate fracture movements due to earthquakes outside of the repository over the next 100,000 years. These simplifications include: 1. fractures are assumed to be frictionless and cohesionless; 2. all energy transmitted to the fracture by the earthquake is assumed to produce elastic deformation of the fracture; no energy is diverted into fracture propagation; and 3. shielding effects of other fractures between the earthquake and the fracture are neglected. The numerical modeling effectively assumes that the

  9. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  10. Experimental Methodology for Estimation of Local Heat Fluxes and Burning Rates in Steady Laminar Boundary Layer Diffusion Flames.

    Science.gov (United States)

    Singh, Ajay V; Gollner, Michael J

    2016-06-01

    Modeling the realistic burning behavior of condensed-phase fuels has remained out of reach, in part because of an inability to resolve the complex interactions occurring at the interface between gas-phase flames and condensed-phase fuels. The current research provides a technique to explore the dynamic relationship between a combustible condensed fuel surface and gas-phase flames in laminar boundary layers. Experiments have previously been conducted in both forced and free convective environments over both solid and liquid fuels. A unique methodology, based on the Reynolds Analogy, was used to estimate local mass burning rates and flame heat fluxes for these laminar boundary layer diffusion flames utilizing local temperature gradients at the fuel surface. Local mass burning rates and convective and radiative heat feedback from the flames were measured in both the pyrolysis and plume regions by using temperature gradients mapped near the wall by a two-axis traverse system. These experiments are time-consuming and can be challenging to design as the condensed fuel surface burns steadily for only a limited period of time following ignition. The temperature profiles near the fuel surface need to be mapped during steady burning of a condensed fuel surface at a very high spatial resolution in order to capture reasonable estimates of local temperature gradients. Careful corrections for radiative heat losses from the thermocouples are also essential for accurate measurements. For these reasons, the whole experimental setup needs to be automated with a computer-controlled traverse mechanism, eliminating most errors due to positioning of a micro-thermocouple. An outline of steps to reproducibly capture near-wall temperature gradients and use them to assess local burning rates and heat fluxes is provided.
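    As a pointer to how the near-wall temperature gradients are used, the sketch below applies Fourier's law at the surface to turn a fitted gradient into a convective heat flux; the Reynolds-analogy relation the study uses for local mass burning rates is not reproduced here, and the numbers are hypothetical.

    ```python
    def convective_heat_flux_to_surface(gas_conductivity_w_mk, dT_dy_wall_k_per_m):
        """Convective heat feedback to the fuel surface (W/m^2), q'' = k_g * (dT/dy)|wall,
        with y measured from the surface into the (hotter) gas: Fourier's law at the wall."""
        return gas_conductivity_w_mk * dT_dy_wall_k_per_m

    # Hypothetical near-wall fit: temperature rises ~700 K over the first millimetre
    # above the surface (dT/dy ~ 7e5 K/m); k_g evaluated at a mean film temperature.
    q_conv = convective_heat_flux_to_surface(0.06, 7.0e5)
    print(round(q_conv / 1000.0, 1), "kW/m^2")  # ~42 kW/m^2
    ```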

  11. Methodical approaches to value assessment and determination of the capitalization level of high-rise construction

    Science.gov (United States)

    Smirnov, Vitaly; Dashkov, Leonid; Gorshkov, Roman; Burova, Olga; Romanova, Alina

    2018-03-01

    The article presents an analysis of methodological approaches to cost estimation and to determining the capitalization level of high-rise construction projects. Factors determining the value of real estate are considered, and three main approaches for estimating the value of real estate assets are given. The main methods of capitalization estimation are analyzed, and the most suitable method for determining the capitalization level of high-rise buildings is proposed. To increase the value of real estate assets, the authors propose measures that make it possible to significantly increase the capitalization of the enterprise through more efficient use of intangible assets and goodwill.

  12. The Freight Analysis Framework Version 4 (FAF4) - Building the FAF4 Regional Database: Data Sources and Estimation Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ho-Ling [ORNL]; Hargrove, Stephanie [ORNL]; Chin, Shih-Miao [ORNL]; Wilson, Daniel W [ORNL]; Taylor, Rob D [ORNL]; Davidson, Diane [ORNL]

    2016-09-01

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive national picture of freight movements among states and major metropolitan areas by all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns the flows to the transportation network, and projects freight flow patterns into the future. The FAF4 is the fourth database of its kind: FAF1 provided estimates for truck, rail, and water tonnage for calendar year 1998; FAF2 provided a more complete picture based on the 2002 Commodity Flow Survey (CFS); and FAF3 made further improvements building on the 2007 CFS. Since the first FAF effort, a number of changes in both data sources and products have taken place. The FAF4 flow matrix described in this report is used as the base-year data to forecast future freight activities, projecting shipment weights and values from year 2020 to 2045 in five-year intervals. It also provides the basis for annual estimates of the FAF4 flow matrix, aiming to provide users with the timeliest data. Furthermore, FAF4 truck freight is routed on the national highway network to produce the FAF4 network database and flow assignments for trucks. This report details the data sources and methodologies applied to develop the base year 2012 FAF4 database. An overview of the FAF4 components is briefly discussed in Section 2. Effects on FAF4 from the changes in the 2012 CFS are highlighted in Section 3. Section 4 provides a general discussion of the process used in filling data gaps within the domestic CFS matrix, specifically the estimation of CFS suppressed/unpublished cells. Over a dozen CFS out-of-scope (OOS) components of FAF4 are then addressed in Section 5 through Section 11 of this report. This includes discussions of farm-based agricultural shipments in Section 5 and shipments from the fishery and logging sectors in Section 6. Shipments of municipal solid wastes and debris from construction

  13. PRELIMINARY METHODOLOGICAL PROPOSAL FOR ESTIMATING ENVIRONMENTAL FLOWS IN PROJECTS APPROVED BY THE MINISTRY OF ENVIRONMENT AND SUSTAINABLE DEVELOPMENT (MADS, COLOMBIA)

    Directory of Open Access Journals (Sweden)

    Gabriel A. Pinilla Agudelo

    2014-01-01

    Full Text Available A methodological proposal for estimating environmental flows in large projects approved by the Agencia Nacional de Licencias Ambientales (ANLA) in Colombian rivers was developed. The project is the result of an agreement between the MADS and the Universidad Nacional de Colombia, Bogotá (UNC). The proposed method begins with an evaluation of hydrological criteria, continues with a hydraulic and water quality validation, and follows with the determination of habitat integrity. This is an iterative process that compares conditions before and after the project construction and allows the magnitude of a monthly flow to be obtained that, besides preserving the ecological functions of the river, guarantees the water uses downstream. Regarding the biotic component, the proposal includes the establishment and monitoring of biotic integrity indices for four aquatic communities (periphyton, macroinvertebrates, riparian vegetation, and fish). The effects that flow reduction may produce in the medium and long term can be assessed by these indices. We present the results of applying the methodology to several projects licensed by the MADS.

  14. Hydrologic evaluation methodology for estimating water movement through the unsaturated zone at commercial low-level radioactive waste disposal site

    Science.gov (United States)

    Meyer, P.D.; Rockhold, M.L.; Nichols, W.E.; Gee, G.W.

    1996-01-01

    This report identifies key technical issues related to hydrologic assessment of water flow in the unsaturated zone at low-level radioactive waste (LLW) disposal facilities. In addition, a methodology for incorporating these issues in the performance assessment of proposed LLW disposal facilities is identified and evaluated. The issues discussed fall into four areas: estimating the water balance at a site (i.e., infiltration, runoff, water storage, evapotranspiration, and recharge); analyzing the hydrologic performance of engineered components of a facility; evaluating the application of models to the prediction of facility performance; and estimating the uncertainty in predicted facility performance. An estimate of recharge at a LLW site is important since recharge is a principal factor in controlling the release of contaminants via the groundwater pathway. The most common methods for estimating recharge are discussed in Chapter 2. Many factors affect recharge; the natural recharge at an undisturbed site is not necessarily representative either of the recharge that will occur after the site has been disturbed or of the flow of water into a disposal facility at the site. Factors affecting recharge are discussed in Chapter 2. At many sites engineered components are required for a LLW facility to meet performance requirements. Chapter 3 discusses the use of engineered barriers to control the flow of water in a LLW facility, with a particular emphasis on cover systems. Design options and the potential performance and degradation mechanisms of engineered components are also discussed. Water flow in a LLW disposal facility must be evaluated before construction of the facility. In addition, hydrologic performance must be predicted over a very long time frame. For these reasons, the hydrologic evaluation relies on the use of predictive modeling. In Chapter 4, the evaluation of unsaturated water flow modeling is discussed. A checklist of items is presented to guide the evaluation
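
    Chapter 2 of the report treats recharge as part of the site water balance. The Python sketch below shows the residual-style balance in its simplest annual form; the component values are invented, and real assessments would rely on lysimeter, tracer or model-based estimates rather than a single-line budget.

    def annual_recharge_mm(precipitation, runoff, evapotranspiration, storage_change):
        """Simple annual water balance: recharge = P - R - ET - dS (all in mm/yr).
        Negative results are truncated to zero (no recharge that year)."""
        return max(precipitation - runoff - evapotranspiration - storage_change, 0.0)

    # Illustrative values for a semi-arid LLW site (mm/yr).
    r = annual_recharge_mm(precipitation=220.0, runoff=10.0,
                           evapotranspiration=195.0, storage_change=5.0)
    print(f"estimated recharge ~ {r:.0f} mm/yr")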

  15. Methodologies for estimating air emissions from three non-traditional source categories: Oil spills, petroleum vessel loading and unloading, and cooling towers. Final report, October 1991-March 1993

    International Nuclear Information System (INIS)

    Ramadan, W.; Sleva, S.; Dufner, K.; Snow, S.; Kersteter, S.L.

    1993-04-01

    The report discusses part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies and to develop appropriate emissions estimation methodologies and emission factors for a group of these source categories. Based on the results of the identification and characterization portions of this research, three source categories were selected for methodology and emission factor development: oil spills, petroleum vessel loading and unloading, and cooling towers. The report describes the category selection process and presents emissions estimation methodologies and emission factor data for the selected source categories. The discussions for each category include general background information, emissions generation activities, pollutants emitted, sources of activity and pollutant data, emissions estimation methodologies and data issues. The information used in these discussions was derived from various sources including available literature, industrial and trade association publications and contracts, experts on the category and activity, and knowledgeable federal and state personnel

  16. Estimates of future inundation of salt marshes in response to sea-level rise in and around Acadia National Park, Maine

    Science.gov (United States)

    Nielsen, Martha G.; Dudley, Robert W.

    2013-01-01

    Salt marshes are ecosystems that provide many important ecological functions in the Gulf of Maine. The U.S. Geological Survey investigated salt marshes in and around Acadia National Park from Penobscot Bay to the Schoodic Peninsula to map the potential for landward migration of marshes using a static inundation model of a sea-level rise scenario of 60 centimeters (cm; 2 feet). The resulting inundation contours can be used by resource managers to proactively adapt to sea-level rise by identifying and targeting low-lying coastal areas adjacent to salt marshes for conservation or further investigation, and to identify risks to infrastructure in the coastal zone. For this study, the mapping of static inundation was based on digital elevation models derived from light detection and ranging (LiDAR) topographic data collected in October 2010. Land-surveyed control points were used to evaluate the accuracy of the LiDAR data in the study area, yielding a root mean square error of 11.3 cm. An independent accuracy assessment of the LiDAR data specific to salt-marsh land surfaces indicated a root mean square error of 13.3 cm and 95-percent confidence interval of ± 26.0 cm. LiDAR-derived digital elevation models and digital color aerial photography, taken during low tide conditions in 2008, with a pixel resolution of 0.5 meters, were used to identify the highest elevation of the land surface at each salt marsh in the study area. Inundation contours for 60-cm of sea-level rise were delineated above the highest marsh elevation for each marsh. Confidence interval contours (95-percent,± 26.0 cm) were delineated above and below the 60-cm inundation contours, and artificial structures, such as roads and bridges, that may present barriers to salt-marsh migration were mapped. This study delineated 114 salt marshes totaling 340 hectares (ha), ranging in size from 0.11 ha (marshes less than 0.2 ha were mapped only if they were on Acadia National Park property) to 52 ha, with a median

  17. Long term rise of a free aquifer in Sahel: hydrodynamic and radioisotopic estimations (3H, 14C) of the recharge in SW Niger

    International Nuclear Information System (INIS)

    Favreau, G.

    2001-01-01

    This article summarizes a hydrodynamic and geochemical survey carried out in SW Niger in order to estimate the impact of rainfall changes and deforestation on the recharge of the uppermost Cretaceous aquifer. 14C and 3H activities of the total dissolved inorganic carbon have been used to quantify the long-term recharge of the aquifer. (J.S.)

  18. Quantitative testing of the methodology for genome size estimation in plants using flow cytometry: a case study of the Primulina genus

    Directory of Open Access Journals (Sweden)

    Jing eWang

    2015-05-01

    Full Text Available Flow cytometry (FCM) is a commonly used method for estimating genome size in many organisms. The use of flow cytometry in plants is influenced by endogenous fluorescence inhibitors and may cause an inaccurate estimation of genome size, thus falsifying the relationship between genome size and phenotypic traits/ecological performance. Quantitative optimization of FCM methodology minimizes such errors, yet there are few studies detailing this methodology. We selected the genus Primulina, one of the most representative and diverse genera of the Old World Gesneriaceae, to evaluate the effect of methodology on determining genome size. Our results showed that buffer choice significantly affected genome size estimation in six out of the eight species examined and altered the 2C-value (DNA content) by as much as 21.4%. The staining duration and propidium iodide (PI) concentration slightly affected the 2C-value. Our experiments showed better histogram quality when the samples were stained for 40 minutes at a PI concentration of 100 µg/ml. The quality of the estimates was not improved by one-day incubation in the dark at 4 °C or by centrifugation. Thus, our study determined an optimum protocol for genome size measurement in Primulina: LB01 buffer supplemented with 100 µg/ml PI and stained for 40 minutes. This protocol also demonstrated high universality in other Gesneriaceae genera. We report the genome size of nine Gesneriaceae species for the first time. The results showed substantial genome size variation both within and among the species, with the 2C-value ranging between 1.62 and 2.71 pg. Our study highlights the necessity of optimizing the FCM methodology prior to obtaining reliable genome size estimates in a given taxon.
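
    In propidium iodide flow cytometry, genome size is normally obtained by comparing the sample's G1 peak fluorescence with that of an internal reference standard of known DNA content. The short Python sketch below shows that ratio calculation; the peak means and the standard's 2C-value are made-up numbers for illustration only.

    def genome_size_2c(sample_peak_mean, standard_peak_mean, standard_2c_pg):
        """Estimate the 2C DNA content (pg) of a sample from the ratio of its G1 peak
        mean fluorescence to that of a co-processed internal reference standard."""
        return sample_peak_mean / standard_peak_mean * standard_2c_pg

    # Illustrative values: a Primulina sample co-chopped with a reference standard
    # whose 2C-value is assumed to be known.
    sample_2c = genome_size_2c(sample_peak_mean=412.0,
                               standard_peak_mean=520.0,
                               standard_2c_pg=2.50)
    print(f"Estimated 2C-value: {sample_2c:.2f} pg")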

  19. Application of best estimate and uncertainty safety analysis methodology to loss of flow events at Ontario's Power Generation's Darlington Nuclear Generating Station

    International Nuclear Information System (INIS)

    Huget, R.G.; Lau, D.K.; Luxat, J.C.

    2001-01-01

    Ontario Power Generation (OPG) is currently developing a new safety analysis methodology based on best estimate and uncertainty (BEAU) analysis. The framework and elements of the new safety analysis methodology are defined. The evolution of safety analysis technology at OPG has been thoroughly documented. Over the years, the use of conservative limiting assumptions in OPG safety analyses has led to gradual erosion of predicted safety margins. The main purpose of the new methodology is to provide a more realistic quantification of safety margins within a probabilistic framework, using best estimate results, with an integrated accounting of the underlying uncertainties. Another objective of the new methodology is to provide a cost-effective means for on-going safety analysis support of OPG's nuclear generating stations. Discovery issues and plant aging effects require that the safety analyses be periodically revised and, in the past, the cost of reanalysis at OPG has been significant. As OPG enters the new competitive marketplace for electricity, there is a strong need to conduct safety analysis in a less cumbersome manner. This paper presents the results of the first licensing application of the new methodology in support of planned design modifications to the shutdown systems (SDSs) at Darlington Nuclear Generating Station (NGS). The design modifications restore dual trip parameter coverage over the full range of reactor power for certain postulated loss-of-flow (LOF) events. The application of BEAU analysis to the single heat transport pump trip event provides a realistic estimation of the safety margins for the primary and backup trip parameters. These margins are significantly larger than those predicted by conventional limit of the operating envelope (LOE) analysis techniques. (author)

  20. Achieving 95% probability level using best estimate codes and the code scaling, applicability and uncertainty (CSAU) [Code Scaling, Applicability and Uncertainty] methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

    Issuance of a revised rule for loss-of-coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best-estimate (BE) computer codes in safety analysis, together with uncertainty analysis. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which will provide uncertainty bounds in a cost-effective, auditable, rational and practical manner. 8 figs., 2 tabs

  1. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  2. Advanced Best-Estimate Methodologies for Thermal-Hydraulics Stability Analyses with TRACG code and Improvements on Operating Boiling Water Reactors

    International Nuclear Information System (INIS)

    Vedovi, J.; Trueba, M.; Ibarra, L; Espino, M.; Hoang, H.

    2016-01-01

    In recent years GE Hitachi has introduced two advanced methodologies to address thermal-hydraulic instabilities in Boiling Water Reactors (BWRs): the “Detect and Suppress Solution - Confirmation Density (DSS-CD)” and the “GEH Simplified Stability Solution (GS3).” These two methodologies are based on Best-Estimate Plus Uncertainty (BEPU) analyses and provide significant improvements in safety, plant maneuvering and fuel economics with respect to existing solutions. The DSS-CD and GS3 solutions have recently been approved by the United States Nuclear Regulatory Commission. This paper describes the main characteristics of these two stability methodologies and shares the experience of their recent implementation in operating BWRs. The BEPU approach provided a much deeper understanding of the parameters affecting instabilities in operating BWRs and allowed for better calculation of plant setpoints by improving plant maneuvering restrictions and reducing manual operator actions. The DSS-CD and GS3 methodologies are both based on safety analyses performed with the best-estimate system code TRACG. The assessment of uncertainty is performed following the Code Scaling, Applicability and Uncertainty (CSAU) methodology documented in NUREG/CR-5249. The two solutions have already been implemented in a combined 18 BWR units, with 7 more units in the process of transitioning. The main results demonstrate a significant decrease (>0.1) in the stability-based Operating Limit Minimum Critical Power Ratio (OLMCPR), which can result in significant fuel savings, and an increase in allowable stability plant setpoints that addresses instability events such as the one that occurred at the Fermi 2 plant in 2015 and can help prevent unnecessary scrams. The paper also describes the advantages of reduced plant maneuvering as a result of transitioning to these solutions; in particular, the history of a BWR/6 transition to DSS-CD is discussed.

  3. On-Line Flutter Prediction Tool for Wind Tunnel Flutter Testing using Parameter Varying Estimation Methodology, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology, Inc. (ZONA) proposes to develop an on-line flutter prediction tool for wind tunnel models using the parameter varying estimation (PVE) technique to...

  4. Comparison of the COMRADEX-IV and AIRDOS-EPA methodologies for estimating the radiation dose to man from radionuclide releases to the atmosphere

    International Nuclear Information System (INIS)

    Miller, C.W.; Hoffman, F.O.; Dunning, D.E. Jr.

    1981-01-01

    This report presents a comparison between two computerized methodologies for estimating the radiation dose to man from radionuclide releases to the atmosphere. The COMRADEX-IV code was designed to provide a means of assessing potential radiological consequences from postulated power reactor accidents. The AIRDOS-EPA code was developed primarily to assess routine radionuclide releases from nuclear facilities. Although a number of different calculations are performed by these codes, three calculations are common to both: atmospheric dispersion, estimation of the internal dose from inhalation, and estimation of the external dose from immersion in air containing gamma-emitting radionuclides. The models used in these calculations were examined and found, in general, to be the same. Most differences in the doses calculated by the two codes are due to differences in the values chosen for input parameters and not due to model differences. A sample problem is presented for illustration.
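
    The three calculations shared by the two codes chain an atmospheric dispersion factor (chi/Q), an intake, and a dose conversion factor. The Python sketch below illustrates that chain with a ground-level, plume-centerline Gaussian chi/Q and an inhalation dose; the numerical inputs are placeholders and the formulas are generic textbook forms, not the specific implementations of COMRADEX-IV or AIRDOS-EPA.

    import numpy as np

    def chi_over_q(sigma_y, sigma_z, u, h_release):
        """Ground-level, plume-centerline chi/Q (s/m^3) for a continuous Gaussian
        plume released at effective height h_release (m) with wind speed u (m/s)."""
        return np.exp(-h_release**2 / (2.0 * sigma_z**2)) / (np.pi * sigma_y * sigma_z * u)

    def inhalation_dose(release_bq, chi_q, breathing_rate, dcf_sv_per_bq):
        """Committed dose (Sv) = time-integrated air concentration x breathing rate x DCF."""
        return release_bq * chi_q * breathing_rate * dcf_sv_per_bq

    # Illustrative numbers only (not values taken from either code's documentation).
    cq = chi_over_q(sigma_y=35.0, sigma_z=18.0, u=3.0, h_release=30.0)
    dose = inhalation_dose(release_bq=1.0e12, chi_q=cq,
                           breathing_rate=3.3e-4,    # m^3/s, adult reference value
                           dcf_sv_per_bq=7.3e-9)     # hypothetical nuclide-specific DCF
    print(f"chi/Q = {cq:.2e} s/m^3, committed dose = {dose:.2e} Sv")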

  5. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: "Kursk"? submarine study

    Science.gov (United States)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.

    2003-03-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second - during the operation stage. The suggested methodology is applied to an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators made it possible to identify the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release are performed. The analysis showed that possible deposition fractions of 10-11 over the Kola Peninsula, and 10-12 - 10-13 for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  6. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: "Kursk" submarine study

    Science.gov (United States)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.

    2003-06-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second - during the operation stage. The suggested methodology is applied to an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators made it possible to identify the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release (e.g. 1 Bq) are performed. The analysis showed that possible deposition fractions of 10-11 (Bq/m2) over the Kola Peninsula, and 10-12 - 10-13 (Bq/m2) for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  7. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: 'Kursk' submarine study

    Directory of Open Access Journals (Sweden)

    A. Baklanov

    2003-01-01

    Full Text Available There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second - during the operation stage. The suggested methodology is applied to an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators made it possible to identify the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release (e.g. 1 Bq) are performed. The analysis showed that possible deposition fractions of 10-11 (Bq/m2) over the Kola Peninsula, and 10-12 - 10-13 (Bq/m2) for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  8. A Comprehensive Methodology for Development, Parameter Estimation, and Uncertainty Analysis of Group Contribution Based Property Models -An Application to the Heat of Combustion

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    A rigorous methodology is developed that addresses numerical and statistical issues when developing group contribution (GC) based property models, such as regression methods, optimization algorithms, performance statistics, outlier treatment, parameter identifiability, and uncertainty of the prediction. The methodology is evaluated through development of a GC method for the prediction of the heat of combustion (ΔHco) for pure components. The results showed that robust regression leads to the best performance statistics for parameter estimation. The bootstrap method is found to be a valid alternative... Owing to identifiability issues, reporting of the 95% confidence intervals of the predicted property values should be mandatory, as opposed to reporting only single-value predictions, currently the norm in the literature. Moreover, inclusion of higher order groups (additional parameters) does not always lead to improved...
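
    A group contribution model predicts a property as a sum of group occurrence counts multiplied by fitted group contributions, and the abstract stresses robust regression and bootstrap-based confidence intervals. The Python sketch below shows a linear GC fit with a bootstrap 95% interval on a new prediction; the groups, counts and property values are hypothetical, and ordinary least squares is used instead of the robust regression favoured in the paper, purely to keep the sketch short.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dataset: rows = compounds, columns = occurrences of three groups
    # (e.g. CH3, CH2, OH); y = measured property (e.g. heat of combustion, kJ/mol).
    N = np.array([[2, 4, 0], [1, 6, 1], [3, 2, 0], [2, 8, 1], [1, 3, 2], [4, 5, 0]], float)
    y = np.array([-2870., -3920., -2650., -5140., -2980., -4310.])

    # Least-squares fit of the group contributions C.
    C, *_ = np.linalg.lstsq(N, y, rcond=None)

    def predict(counts, contribs):
        """Linear GC prediction: property = sum_i n_i * C_i."""
        return counts @ contribs

    # Bootstrap the fit to get a 95% confidence interval on a new prediction.
    new = np.array([2, 5, 1], float)
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, len(y), len(y))
        Cb, *_ = np.linalg.lstsq(N[idx], y[idx], rcond=None)
        boot.append(predict(new, Cb))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"prediction = {predict(new, C):.0f} kJ/mol, 95% CI [{lo:.0f}, {hi:.0f}]")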

  9. METHODOLOGY FOR ESTIMATION OF STATIONARY AND DYNAMIC PARAMETERS FOR LIQUEFIED PETROLEUM GAS (LPG) RE-LIQUEFACTION IN SPHERICAL STORAGE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Alexander Mendoza-Acosta

    2017-07-01

    Full Text Available The temperature difference between the environment and the liquid contained in liquefied petroleum gas storage spheres produces a net inward heat flow, an increase in stored fuel temperature, partial vaporization of the LPG and, consequently, an increase in storage pressure. Since uncontrolled pressure increases could lead to hazardous situations and economic losses, re-liquefaction systems consisting of auto-refrigeration units are installed in order to maintain adequate safety conditions; such a system extracts the evaporated gas, compresses it and then condenses it again in a closed cooling cycle. Frequently these systems are designed using heuristic criteria, without considering the calculations necessary for correct equipment sizing; this results in costly modifications or in oversized equipment. In the present article, a simple but effective methodology for the calculation of thermal loads, the daily temperature increase rate, and pressure accumulation and recovery times is presented. The methodology was compared with real data, acquired and processed during the summer months of 2015 and 2016 for 12 storage spheres of a gas company located in a coastal state of Mexico, and the values predicted for the daily temperature increase rate and the recovery times were found to be statistically consistent with the experimental data.
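
    The first two quantities the methodology computes are the net heat inflow through the sphere wall and the resulting daily temperature rise of the stored liquid. The Python sketch below reproduces those two steps under a simple overall heat transfer coefficient assumption; the coefficient, sphere diameter, inventory and LPG heat capacity are illustrative values, not data from the monitored spheres.

    import numpy as np

    def thermal_load(U, diameter, T_ambient, T_lpg):
        """Net heat inflow (W) through a spherical shell with overall heat transfer
        coefficient U (W/m^2 K), driven by the ambient-liquid temperature difference."""
        area = np.pi * diameter**2
        return U * area * (T_ambient - T_lpg)

    def daily_temperature_rise(Q_watts, mass_kg, cp_j_per_kg_k):
        """Rate of temperature increase (K/day) of the stored liquid, ignoring
        the vapour space: dT/dt = Q / (m * cp), converted from K/s to K/day."""
        return Q_watts / (mass_kg * cp_j_per_kg_k) * 86400.0

    Q = thermal_load(U=1.2, diameter=14.6, T_ambient=308.15, T_lpg=300.15)
    rate = daily_temperature_rise(Q, mass_kg=8.0e5, cp_j_per_kg_k=2500.0)
    print(f"heat inflow = {Q / 1e3:.1f} kW, temperature rise = {rate:.2f} K/day")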

  10. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary between the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic.
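
    At the high level, the integrated model combines subsystem reliabilities (obtained at the low level from the software control flow analysis) through a fault tree. The Python sketch below shows such a combination for a hypothetical two-train protection system with shared hardware; the tree structure and the unavailability numbers are invented for illustration and are not the system analyzed in the paper.

    def and_gate(*unavailabilities):
        """AND gate: all inputs must fail (independence assumed)."""
        p = 1.0
        for q in unavailabilities:
            p *= q
        return p

    def or_gate(*unavailabilities):
        """OR gate: any input failing fails the gate (independence assumed)."""
        p = 1.0
        for q in unavailabilities:
            p *= (1.0 - q)
        return 1.0 - p

    # Hypothetical subsystem unavailabilities: the software values would come from the
    # low-level control-flow analysis, the hardware value from component data.
    q_sw_train_a = 1.0e-4
    q_sw_train_b = 1.0e-4
    q_hw_shared = 5.0e-5

    # System fails if both redundant software trains fail, OR the shared hardware fails.
    q_system = or_gate(and_gate(q_sw_train_a, q_sw_train_b), q_hw_shared)
    print(f"system unavailability ~ {q_system:.2e}, reliability ~ {1 - q_system:.6f}")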

  11. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary between the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the dynamic safety system (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)

  12. Methodology to estimate the cost of severe accident risk / maximum benefit

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, G.; Flores, R. M.; Vega, E., E-mail: gozalo.mendoza@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)]

    2016-09-15

    For programs and activities to manage aging effects, any changes to plant operations, inspections, maintenance activities, systems, and administrative control procedures during the renewal period that are designed to manage the effects of aging as required by 10 CFR Part 54, and that could impact the environment, should be characterized. Environmental impacts significantly different from those described in the final environmental statement for the current operating license should be described in detail. When complying with the requirements of a license renewal application, the Severe Accident Mitigation Alternatives (SAMA) analysis is contained in a supplement to the environmental report of the plant that meets the requirements of 10 CFR Part 51. In this paper, the methodology for estimating the cost of severe accident risk is established and discussed; it is then used to identify and select the severe accident mitigation alternatives, which are analyzed to estimate the maximum benefit that an alternative could achieve if it eliminated all risk. The cost of severe accident risk is estimated using the regulatory analysis techniques of the US Nuclear Regulatory Commission (NRC). The ultimate goal of implementing the methodology is to identify SAMA candidates that have the potential to reduce the severe accident risk and to determine whether the implementation of each candidate is cost-effective. (Author)
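
    The cost of severe accident risk in this kind of analysis is, in essence, a sum over accident sequences of frequency times consequence cost, and the maximum benefit of a mitigation alternative is the discounted value of eliminating that annual risk over the remaining licence term. The Python sketch below shows that arithmetic with a continuous-discounting present-value factor (one common form in regulatory analyses); the sequences, costs, discount rate and remaining term are all hypothetical.

    import math

    def present_value_factor(discount_rate, years):
        """Present-value factor for a constant annual amount over the remaining
        licence term, continuous-discounting form: (1 - exp(-r*t)) / r."""
        return (1.0 - math.exp(-discount_rate * years)) / discount_rate

    def maximum_benefit(annual_cost_risk, discount_rate=0.07, remaining_years=20.0):
        """Maximum averted cost-risk if an alternative eliminated all risk:
        the discounted value of the annual severe-accident cost risk."""
        return annual_cost_risk * present_value_factor(discount_rate, remaining_years)

    # Illustrative annual cost risk: sum of sequence frequency (1/yr) times consequence cost ($).
    sequences = [(2.0e-6, 4.0e9), (5.0e-7, 1.2e10)]   # hypothetical (frequency, cost) pairs
    annual_risk = sum(f * c for f, c in sequences)
    print(f"annual cost risk ~ ${annual_risk:,.0f}/yr, "
          f"maximum benefit ~ ${maximum_benefit(annual_risk):,.0f}")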

  13. Methodology for Time-Domain Estimation of Storm-Time Electric Fields Using the 3D Earth Impedance

    Science.gov (United States)

    Kelbert, A.; Balch, C. C.; Pulkkinen, A. A.; Egbert, G. D.; Love, J. J.; Rigler, E. J.; Fujii, I.

    2016-12-01

    Magnetic storms can induce geoelectric fields in the Earth's electrically conducting interior, interfering with the operations of electric-power grid industry. The ability to estimate these electric fields at Earth's surface in close to real-time and to provide local short-term predictions would improve the ability of the industry to protect their operations. At any given time, the electric field at the Earth's surface is a function of the time-variant magnetic activity (driven by the solar wind), and the local electrical conductivity structure of the Earth's crust and mantle. For this reason, implementation of an operational electric field estimation service requires an interdisciplinary, collaborative effort between space science, real-time space weather operations, and solid Earth geophysics. We highlight in this talk an ongoing collaboration between USGS, NOAA, NASA, Oregon State University, and the Japan Meteorological Agency, to develop algorithms that can be used for scenario analyses and which might be implemented in a real-time, operational setting. We discuss the development of a time domain algorithm that employs discrete time domain representation of the impedance tensor for a realistic 3D Earth, known as the discrete time impulse response (DTIR), convolved with the local magnetic field time series, to estimate the local electric field disturbances. The algorithm is validated against measured storm-time electric field data collected in the United States and Japan. We also discuss our plans for operational real-time electric field estimation using 3D Earth impedances.

  14. Comparison of two methodologies to estimate microbial activity in a pumpkin crop (Cucurbita maxima) in blooming and maturity phases

    International Nuclear Information System (INIS)

    Cadena, Silvio F; Madrinan, Raul

    1999-01-01

    The measurement of soil microbial activity is an important element of fertility and conservation diagnostics. With this in mind, we compared two methodologies: the volumetric calcimeter method, which measures the CO2 released by the soil into a closed atmospheric system (glass tubes), and the CAB method, which retains the CO2 released during an incubation phase in a closed system (glass bottles) at 23 degrees Celsius and subsequently titrates it with 0.5 N HCl. Results showed that microbial activity in a pumpkin crop (Cucurbita maxima) in Palmira, Valle del Cauca, Colombia, is higher in the maturity phase than in the blooming phase. This occurs because of the contribution of nutritive substances from the pumpkin's roots at physiological maturity and the microclimate provided by the plant's full foliage. Because the CAB method tests the soil at its natural moisture content, its numeric results, expressed as mg C-CO2 per g of soil, are more reliable than those of the volumetric calcimeter method. The cost analysis showed that the CAB method is twenty percent cheaper than the volumetric calcimeter method.

  15. An Evaluation of Population Density Mapping and Built up Area Estimates in Sri Lanka Using Multiple Methodologies

    Science.gov (United States)

    Engstrom, R.; Soundararajan, V.; Newhouse, D.

    2017-12-01

    In this study we examine how well multiple population density and built-up area estimates that utilize satellite data compare in Sri Lanka. The population relationship is examined at the Gram Niladhari (GN) level, the lowest administrative unit in Sri Lanka, using data from the 2011 census. For this study we have two spatial domains, the whole country and a 3,500 km2 sub-sample for which we have complete high spatial resolution imagery coverage. For both the entire country and the sub-sample we examine how consistent the existing publicly available measures of population constructed from satellite imagery are at predicting population density. For the sub-sample only, we examine how well a suite of values derived from high spatial resolution satellite imagery predicts population density and how our built-up area estimate compares to other publicly available estimates. Population measures were obtained from the Sri Lankan census and were downloaded from Facebook, WorldPop, GPW, and LandScan. Percentage built-up area at the GN level was calculated from three sources: Facebook, Global Urban Footprint (GUF), and the Global Human Settlement Layer (GHSL). For the sub-sample we derived a variety of indicators from the high spatial resolution imagery using deep learning convolutional neural networks, an object-oriented approach, and a non-overlapping block spatial feature approach. Variables calculated include cars, shadows (a proxy for building height), built-up area, buildings, roof types, roads, type of agriculture, NDVI, Pantex, Histogram of Oriented Gradients (HOG), and others. Results indicate that population estimates are accurate at the higher, DS Division level but not necessarily at the GN level. Estimates from Facebook correlated well with census population (GN correlation of 0.91) but measures from GPW and WorldPop are more weakly correlated (0.64 and 0.34). Estimates of built-up area appear to be reliable. In the 32-DSD sub-sample, Facebook's built-up area measure

  16. Methodology for time-domain estimation of storm time geoelectric fields using the 3-D magnetotelluric response tensors

    Science.gov (United States)

    Kelbert, Anna; Balch, Christopher C.; Pulkkinen, Antti; Egbert, Gary D.; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko

    2017-07-01

    Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.
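
    In the time-domain variant of the method, the surface geoelectric field is obtained by convolving the magnetic storm time series with an impulse-response representation of the site impedance. The Python sketch below shows only that convolution step in one dimension with a synthetic kernel; the real algorithm uses the full 3-D impedance tensor and a carefully constructed discrete-time impulse response, so the kernel, units and input series here are purely illustrative.

    import numpy as np

    def estimate_e_field(dB_dt, impulse_response, dt):
        """Discrete convolution of the magnetic variation series with a (given)
        impulse response to produce an electric-field time series.
        dB_dt: magnetic field time-derivative samples; dt: sample interval (s)."""
        return np.convolve(dB_dt, impulse_response, mode="full")[: len(dB_dt)] * dt

    # Illustrative inputs: a synthetic storm-time dB/dt trace and a short, decaying
    # kernel standing in for the site-specific discrete time impulse response.
    t = np.arange(0, 3600, 10.0)                        # 1 hour at 10 s cadence
    dB_dt = 5.0 * np.exp(-((t - 1200) / 300.0) ** 2)    # nT/s pulse
    dtir = np.exp(-np.arange(0, 600, 10.0) / 120.0)     # hypothetical response kernel

    E = estimate_e_field(dB_dt, dtir, dt=10.0)
    print(f"peak |E| (arbitrary units): {np.abs(E).max():.2f}")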

  17. Methodological issues in the estimation of parental time – Analysis of measures in a Canadian time-use survey

    OpenAIRE

    Cara B. Fedick; Shelley Pacholok; Anne H. Gauthier

    2005-01-01

    Extensive small scale studies have documented that when people assume the role of assisting a person with impairments or an older person, care activities account for a significant portion of their daily routines. Nevertheless, little research has investigated the problem of measuring the time that carers spend in care-related activities. This paper contrasts two different measures of care time – an estimated average weekly hours question in the 1998 Australian Survey of Disability, Ageing and...

  18. The methodology proposed to estimate the absorbed dose at the entrance of the labyrinth in HDR brachytherapy facilities with IR-192

    International Nuclear Information System (INIS)

    Pujades, M. C.; Perez-Calatayud, J.; Ballester, F.

    2012-01-01

    In the absence of a procedure for evaluating the design of a brachytherapy (BT) vault with a maze from the radiation protection point of view, the formalism for external-beam radiation is usually adapted. The purpose of this study is to adapt the methodology described in the National Council on Radiation Protection and Measurements Report 151 (NCRP 151), Structural Shielding Design for Megavoltage X- and Gamma-Ray Radiotherapy Facilities, for estimating the dose at the door in BT, and to compare it with the results obtained by the Monte Carlo (MC) method for a particular case of bunker. (Author) 17 refs.

  19. Utility of Capture-Recapture Methodology to Estimate Prevalence of Congenital Heart Defects Among Adolescents in 11 New York State Counties: 2008 to 2010.

    Science.gov (United States)

    Akkaya-Hocagil, Tugba; Hsu, Wan-Hsiang; Sommerhalter, Kristin; McGarry, Claire; Van Zutphen, Alissa

    2017-11-01

    Congenital heart defects (CHDs) are the most common birth defects in the United States, and the population of individuals living with CHDs is growing. Though CHD prevalence in infancy has been well characterized, better prevalence estimates among children and adolescents in the United States are still needed. We used capture-recapture methods to estimate CHD prevalence among adolescents residing in 11 New York counties. The three data sources used for analysis included Statewide Planning and Research Cooperative System (SPARCS) hospital inpatient records, SPARCS outpatient records, and medical records provided by seven pediatric congenital cardiac clinics from 2008 to 2010. Bayesian log-linear models were fit using the R package Conting to account for dataset dependencies and heterogeneous catchability. A total of 2537 adolescent CHD cases were captured in our three data sources. Forty-four cases were identified in all data sources, 283 cases were identified in two of three data sources, and 2210 cases were identified in a single data source. The final model yielded an estimated total adolescent CHD population of 3845, indicating that 66% of the cases in the catchment area were identified in the case-identifying data sources. Based on 2010 Census estimates, we estimated adolescent CHD prevalence as 6.4 CHD cases per 1000 adolescents (95% confidence interval: 6.2-6.6). We used capture-recapture methodology with a population-based surveillance system in New York to estimate CHD prevalence among adolescents. Future research incorporating additional data sources may improve prevalence estimates in this population. Birth Defects Research 109:1423-1429, 2017. © 2017 Wiley Periodicals, Inc.
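
    The study fits Bayesian log-linear models to the three-source capture histories with the R package Conting; as a much simpler illustration of the capture-recapture idea, the Python sketch below applies a two-source Chapman (Lincoln-Petersen) estimator. The counts, the assumed source independence and the catchment population are hypothetical, and this is not the model used in the paper.

    def lincoln_petersen(n_source1, n_source2, n_both):
        """Chapman-corrected two-source capture-recapture estimate of total population
        size, assuming independent sources and homogeneous catchability."""
        return (n_source1 + 1) * (n_source2 + 1) / (n_both + 1) - 1

    # Hypothetical counts: cases found in inpatient records, in outpatient records,
    # and in both (numbers are illustrative, not the study's data).
    total = lincoln_petersen(n_source1=1500, n_source2=1200, n_both=600)
    prevalence_per_1000 = total / 600_000 * 1000   # hypothetical adolescent population
    print(f"estimated cases ~ {total:.0f}, prevalence ~ {prevalence_per_1000:.1f} per 1000")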

  20. Methodologies on estimating the energy requirements for maintenance and determining the net energy contents of feed ingredients in swine: a review of recent work.

    Science.gov (United States)

    Li, Zhongchao; Liu, Hu; Li, Yakui; Lv, Zhiqian; Liu, Ling; Lai, Changhua; Wang, Junjun; Wang, Fenglai; Li, Defa; Zhang, Shuai

    2018-01-01

    In the past two decades, a considerable amount of research has focused on the determination of the digestible energy (DE) and metabolizable energy (ME) contents of feed ingredients fed to swine. Compared with the DE and ME systems, the net energy (NE) system is assumed to be the most accurate estimate of the energy actually available to the animal. However, published data pertaining to the measured NE content of ingredients fed to growing pigs are limited. Therefore, the Feed Data Group at the Ministry of Agriculture Feed Industry Centre (MAFIC) located at China Agricultural University has evaluated the NE content of many ingredients using indirect calorimetry. The present review summarizes the NE research conducted at MAFIC and compares these results with those from other research groups from a methodological perspective. These research projects mainly focus on estimating the energy requirements for maintenance and their impact on the determination, prediction, and validation of the NE content of several ingredients fed to swine. The estimation of maintenance energy is affected by methodology, growth stage, and previous feeding level. The fasting heat production method and the curvilinear regression method were used at MAFIC to estimate the NE requirement for maintenance. The NE contents of different feedstuffs were determined using indirect calorimetry following a standard experimental procedure at MAFIC. Previously generated NE equations can also be used to predict NE in situations where calorimeters are not available. Although popular, caloric efficiency is not a generally accepted method for validating the energy content of individual feedstuffs. In the future, more accurate and dynamic NE prediction equations aimed at specific ingredients should be established, and more practical validation approaches need to be developed.
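
    In the indirect calorimetry approach referred to above, total heat production is computed from gas exchange and urinary nitrogen with a Brouwer-type equation, and dietary NE is then derived from the energy balance (retained energy plus maintenance, the latter approximated by fasting heat production). The Python sketch below follows that logic; the coefficients are the commonly cited Brouwer values, the balance formulation is a simplification, and all intake figures are illustrative rather than data from the MAFIC experiments.

    def heat_production_kj(o2_l, co2_l, ch4_l, urinary_n_g):
        """Brouwer-type equation for total heat production (kJ) from gas exchange
        volumes (litres) and urinary nitrogen (g), using commonly cited coefficients."""
        return 16.18 * o2_l + 5.02 * co2_l - 2.17 * ch4_l - 5.99 * urinary_n_g

    def ne_content(me_intake_mj, heat_production_mj, fhp_mj, feed_intake_kg):
        """NE content of the diet (MJ/kg): retained energy (ME intake - heat production)
        plus maintenance energy approximated by fasting heat production, per kg of feed."""
        retained_energy = me_intake_mj - heat_production_mj
        return (retained_energy + fhp_mj) / feed_intake_kg

    # Illustrative daily values for one growing pig (not data from the MAFIC studies).
    hp = heat_production_kj(o2_l=500.0, co2_l=580.0, ch4_l=1.0, urinary_n_g=20.0) / 1000.0  # MJ
    ne = ne_content(me_intake_mj=25.0, heat_production_mj=hp, fhp_mj=7.5, feed_intake_kg=1.8)
    print(f"heat production = {hp:.1f} MJ/d, dietary NE ~ {ne:.2f} MJ/kg")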

  1. Forest Cover Estimation in Ireland Using Radar Remote Sensing: A Comparative Analysis of Forest Cover Assessment Methodologies

    Science.gov (United States)

    Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O'Halloran, John

    2015-01-01

    Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its ability to acquire imagery regardless of cloud cover, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1–98.5%), with differences between the two classifiers being minimal (forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%), whereas in Sligo the highest overall accuracy was obtained for the Prime2 dataset (97.43%), although the accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting.
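
    Both classifiers evaluated in the study are available in scikit-learn, so a minimal training-and-comparison loop can be sketched as below (Python). The feature matrix stands in for per-pixel L-band backscatter features and the labels for forest/non-forest reference data; everything here is synthetic and only illustrates the workflow, not the study's data or settings.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder feature matrix: e.g. HH and HV backscatter (dB) and their ratio per
    # pixel, with labels 1 = forest, 0 = non-forest from reference data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 3))
    y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)   # synthetic, separable labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("Random Forests", RandomForestClassifier(n_estimators=500, random_state=0)),
                      ("Extremely Randomised Trees", ExtraTreesClassifier(n_estimators=500, random_state=0))]:
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{name}: overall accuracy = {acc:.3f}")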

  2. MIRD methodology

    International Nuclear Information System (INIS)

    Rojo, Ana M.; Gomez Parada, Ines

    2004-01-01

    The MIRD (Medical Internal Radiation Dose) system was established by the Society of Nuclear Medicine of the USA in 1960 to assist the medical community in the estimation of the dose in organs and tissues due to the incorporation of radioactive materials. Since then, the 'MIRD Dose Estimate Reports' (Nos. 1 to 12) and 'Pamphlets', of great utility for dose calculations, have been published. The MIRD system was planned essentially for the calculation of doses received by patients during nuclear medicine diagnostic procedures. The MIRD methodology for absorbed dose calculations in different tissues is explained.

  3. Application of realistic (best- estimate) methodologies for large break loss of coolant (LOCA) safety analysis: licensing of Westinghouse ASTRUM evaluation model in Spain

    International Nuclear Information System (INIS)

    Lage, Carlos; Frepoli, Cesare

    2010-01-01

    When the LOCA Final Acceptance Criteria for Light Water Reactors were issued in Appendix K of 10 CFR 50, both the USNRC and the industry recognized that the rule was highly conservative. At that time, however, the degree of conservatism in the analysis could not be quantified. As a result, the USNRC began a research program to identify the degree of conservatism in those models permitted in the Appendix K rule and to develop improved thermal-hydraulic computer codes so that realistic accident analysis calculations could be performed. The overall results of this research program quantified the conservatism in the Appendix K rule and confirmed that some relaxation of the rule can be made without a loss in safety to the public. Also, from a risk-informed perspective it is recognized that conservatism is not always a complete defense for a lack of sophistication in models. In 1988, as a result of the improved understanding of LOCA phenomena, the USNRC staff amended the requirements of 10 CFR 50.46 and Appendix K, 'ECCS Evaluation Models', so that a realistic evaluation model may be used to analyze the performance of the ECCS during a hypothetical LOCA. Under the amended rules, best-estimate plus uncertainty (BEPU) thermal-hydraulic analysis may be used in place of the overly prescriptive set of models mandated by the Appendix K rule. Further guidance for the use of best-estimate codes was provided in Regulatory Guide 1.157. To demonstrate use of the revised ECCS rule, the USNRC and its consultants developed a method called the Code Scaling, Applicability, and Uncertainty (CSAU) evaluation methodology as an approach for defining and qualifying a best-estimate thermal-hydraulic code and quantifying the uncertainties in a LOCA analysis. More recently the CSAU principles have been generalized in the Evaluation Model Development and Assessment Process (EMDAP) of Regulatory Guide 1.203. ASTRUM is the Westinghouse Best Estimate Large Break LOCA evaluation model applicable to two-, three

  4. Consequences of using different soil texture determination methodologies for soil physical quality and unsaturated zone time lag estimates.

    Science.gov (United States)

    Fenton, O; Vero, S; Ibrahim, T G; Murphy, P N C; Sherriff, S C; Ó hUallacháin, D

    2015-11-01

    Elucidation of when the loss of pollutants below the rooting zone in agricultural landscapes affects water quality is important when assessing the efficacy of mitigation measures. Investigation of this inherent time lag (t(T)) is divided into unsaturated (t(u)) and saturated (t(s)) components. The duration of these components relative to each other differs depending on soil characteristics and the landscape position. The present field study focuses on t(u) estimation in a scenario where the saturated zone is likely to constitute a higher proportion of t(T). In such instances, or where only initial breakthrough (IBT) or centre of mass (COM) is of interest, utilisation of site and depth specific "simple" textural class or actual sand-silt-clay percentages to generate soil water characteristic curves with associated soil hydraulic parameters is acceptable. With the same data it is also possible to estimate a soil physical quality (S) parameter for each soil layer which can be used to infer many other physical, chemical and biological quality indicators. In this study, hand texturing in the field was used to determine textural classes of a soil profile. Laboratory methods, including the hydrometer, pipette and laser diffraction methods, were used to determine actual sand-silt-clay percentages of sections of the same soil profile. Results showed that in terms of S, hand texturing resulted in a lower index value (indicating a degraded soil) than that of pipette, hydrometer and laser equivalents. There was no difference between S index values determined using the pipette, hydrometer and laser diffraction methods. The difference between the three laboratory methods on both the IBT and COM stages of t(u) was negligible, and in this instance was unlikely either to affect groundwater monitoring decisions or to be of consequence from a policy perspective. When t(u) estimates are made over the full depth of the vadose zone, which may extend to several metres, errors resulting from

  5. Consequences of using different soil texture determination methodologies for soil physical quality and unsaturated zone time lag estimates

    Science.gov (United States)

    Fenton, O.; Vero, S.; Ibrahim, T. G.; Murphy, P. N. C.; Sherriff, S. C.; Ó hUallacháin, D.

    2015-11-01

    Elucidation of when the loss of pollutants below the rooting zone in agricultural landscapes affects water quality is important when assessing the efficacy of mitigation measures. Investigation of this inherent time lag (tT) is divided into unsaturated (tu) and saturated (ts) components. The duration of these components relative to each other differs depending on soil characteristics and the landscape position. The present field study focuses on tu estimation in a scenario where the saturated zone is likely to constitute a higher proportion of tT. In such instances, or where only initial breakthrough (IBT) or centre of mass (COM) is of interest, utilisation of site and depth specific "simple" textural class or actual sand-silt-clay percentages to generate soil water characteristic curves with associated soil hydraulic parameters is acceptable. With the same data it is also possible to estimate a soil physical quality (S) parameter for each soil layer which can be used to infer many other physical, chemical and biological quality indicators. In this study, hand texturing in the field was used to determine textural classes of a soil profile. Laboratory methods, including the hydrometer, pipette and laser diffraction methods, were used to determine actual sand-silt-clay percentages of sections of the same soil profile. Results showed that in terms of S, hand texturing resulted in a lower index value (indicating a degraded soil) than that of pipette, hydrometer and laser equivalents. There was no difference between S index values determined using the pipette, hydrometer and laser diffraction methods. The difference between the three laboratory methods on both the IBT and COM stages of tu was negligible, and in this instance was unlikely either to affect groundwater monitoring decisions or to be of consequence from a policy perspective. When tu estimates are made over the full depth of the vadose zone, which may extend to several metres, errors resulting from the use of
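
    A common way to compute a soil physical quality index S of the kind referred to above is Dexter's formulation: the slope of the van Genuchten water retention curve at its inflection point. The Python sketch below uses that formulation; whether it matches the exact index applied in the study is an assumption, and the van Genuchten parameters (which in practice might come from a pedotransfer function driven by the measured sand-silt-clay percentages) are illustrative.

    def dexter_s(theta_sat, theta_res, n):
        """Soil physical quality index S (Dexter-type): the slope of the van Genuchten
        water retention curve at its inflection point, from theta_sat and theta_res
        (cm^3/cm^3) and the shape parameter n, with m = 1 - 1/n."""
        m = 1.0 - 1.0 / n
        return abs(-n * (theta_sat - theta_res) * (1.0 + 1.0 / m) ** (-(1.0 + m)))

    # Illustrative parameters for one soil layer (e.g. obtained from a pedotransfer
    # function applied to the laboratory particle-size data).
    s_index = dexter_s(theta_sat=0.43, theta_res=0.078, n=1.56)
    print(f"S = {s_index:.3f}  (values above ~0.035 are often read as good physical quality)")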

  6. An inventory of nitrous oxide emissions from agriculture in the UK using the IPCC methodology: emission estimate, uncertainty and sensitivity analysis

    Science.gov (United States)

    Brown, L.; Armstrong Brown, S.; Jarvis, S. C.; Syed, B.; Goulding, K. W. T.; Phillips, V. R.; Sneath, R. W.; Pain, B. F.

    Nitrous oxide emission from UK agriculture was estimated, using the IPCC default values of all emission factors and parameters, to be 87 Gg N2O-N in both 1990 and 1995. This estimate was shown, however, to have an overall uncertainty of 62%. The largest component of the emission (54%) was from the direct (soil) sector. Two of the three emission factors applied within the soil sector, EF1 (direct emission from soil) and EF3PRP (emission from pasture, range and paddock), were amongst the most influential on the total estimate, producing a ±31% and a +11% to -17% change in emissions, respectively, when varied through the IPCC range from the default value. The indirect sector (from leached N and deposited ammonia) contributed 29% of the total emission, and had the largest uncertainty (126%). The factors determining the fraction of N leached (FracLEACH) and emissions from it (EF5) were the two most influential. These parameters are poorly specified and there is great potential to improve the emission estimate for this component. Use of mathematical models (NCYCLE and SUNDIAL) to predict FracLEACH suggested that the IPCC default value for this parameter may be too high for most situations in the UK. Comparison with other UK-derived inventories suggests that the IPCC methodology may overestimate emission. Although the IPCC approach includes additional components to the other inventories (most notably emission from indirect sources), estimates for the common components (i.e. fertiliser and animals), and emission factors used, are higher than those of other inventories. Whilst it is recognised that the IPCC approach is generalised in order to allow widespread applicability, sufficient data are available to specify at least two of the most influential parameters, i.e. EF1 and FracLEACH, more accurately, and so provide an improved estimate of nitrous oxide emissions from UK agriculture.
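
    Not part of the record: a rough sketch of the Tier 1 bookkeeping implied by the IPCC methodology for the direct-soil and leaching components discussed above. The emission factors are the old IPCC default values cited in this literature; the activity data are placeholders, not the UK figures.

        EF1 = 0.0125        # direct soil emission factor, kg N2O-N per kg N input (IPCC default)
        EF5 = 0.025         # emission factor for leached N (IPCC default)
        FRAC_LEACH = 0.30   # default fraction of applied N lost by leaching/runoff

        fert_n = 1.0e9      # kg N applied as synthetic fertiliser per year (placeholder)
        manure_n = 0.8e9    # kg N applied or deposited as animal manure per year (placeholder)

        direct = EF1 * (fert_n + manure_n)                 # kg N2O-N emitted directly from soils
        indirect = EF5 * FRAC_LEACH * (fert_n + manure_n)  # kg N2O-N from leached N
        total = (direct + indirect) * 44.0 / 28.0          # convert N2O-N to N2O
        print(f"{total / 1e6:.0f} Gg N2O")                 # ~57 Gg with these placeholder inputs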

  7. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are direct when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers including 1 hospital in Bamako, Mali, indicated some practical problems and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control for example could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted in month and year of occurrence. The interviews took only slightly longer than the basic survey

  8. Multibody Kinematics Optimization for the Estimation of Upper and Lower Limb Human Joint Kinematics: A Systematized Methodological Review.

    Science.gov (United States)

    Begon, Mickaël; Andersen, Michael Skipper; Dumas, Raphaël

    2018-03-01

    Multibody kinematics optimization (MKO) aims to reduce soft tissue artefact (STA) and is a key step in musculoskeletal modeling. The objective of this review was to identify the numerical methods, their validation and performance for the estimation of the human joint kinematics using MKO. Seventy-four papers were extracted from a systematized search in five databases and cross-referencing. Model-derived kinematics were obtained using either constrained optimization or Kalman filtering to minimize the difference between measured (i.e., by skin markers, electromagnetic or inertial sensors) and model-derived positions and/or orientations. While hinge, universal, and spherical joints prevail, advanced models (e.g., parallel and four-bar mechanisms, elastic joint) have been introduced, mainly for the knee and shoulder joints. Models and methods were evaluated using: (i) simulated data based, however, on oversimplified STA and joint models; (ii) reconstruction residual errors, ranging from 4 mm to 40 mm; (iii) sensitivity analyses which highlighted the effect (up to 36 deg and 12 mm) of model geometrical parameters, joint models, and computational methods; (iv) comparison with other approaches (i.e., single body kinematics optimization and nonoptimized kinematics); (v) repeatability studies that showed low intra- and inter-observer variability; and (vi) validation against ground-truth bone kinematics (with errors between 1 deg and 22 deg for tibiofemoral rotations and between 3 deg and 10 deg for glenohumeral rotations). Moreover, MKO was applied to various movements (e.g., walking, running, arm elevation). Additional validations, especially for the upper limb, should be undertaken and we recommend a more systematic approach for the evaluation of MKO. In addition, further model development, scaling, and personalization methods are required to better estimate the secondary degrees-of-freedom (DoF).

  9. A fast-reliable methodology to estimate the concentration of rutile or anatase phases of TiO2

    Directory of Open Access Journals (Sweden)

    A. R. Zanatta

    2017-07-01

    Full Text Available Titanium dioxide (TiO2) is a low-cost, chemically inert material that became the basis of many modern applications ranging from, for example, cosmetics to photovoltaics. TiO2 exists in three different crystal phases − Rutile, Anatase and, less commonly, Brookite − and, in most cases, the presence or relative amount of these phases is essential in deciding the TiO2 final application and its related efficiency. Traditionally, X-ray diffraction has been chosen to study TiO2 and provides both the phase identification and the Rutile-to-Anatase ratio. Similar information can be achieved from Raman scattering spectroscopy, which, additionally, is versatile and involves rather simple instrumentation. Motivated by these aspects, this work took into account various TiO2 Rutile+Anatase powder mixtures and their corresponding Raman spectra. Essentially, the method described here was based upon the fact that the Rutile and Anatase crystal phases have distinctive phonon features, and therefore, the composition of the TiO2 mixtures can be readily assessed from their Raman spectra. The experimental results clearly demonstrate the suitability of Raman spectroscopy for estimating the concentration of Rutile or Anatase in TiO2 and are expected to influence the study of TiO2-related thin films, interfaces, systems with reduced dimensions, and devices like photocatalytic and solar cells.
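
    Not part of the record: a minimal sketch of the kind of intensity-ratio estimate the abstract describes, using the usual anatase (~144 cm-1) and rutile (~447 cm-1) Raman bands. The cross-section calibration constant k is a placeholder, not the calibration reported in the paper.

        def rutile_weight_fraction(i_anatase_144, i_rutile_447, k=1.0):
            # Two-phase mixing estimate from integrated Raman band intensities;
            # k absorbs the relative scattering cross sections (placeholder value).
            return 1.0 / (1.0 + k * i_anatase_144 / i_rutile_447)

        print(round(rutile_weight_fraction(i_anatase_144=3200.0, i_rutile_447=800.0), 2))  # 0.2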

  10. Solar thermal technology development: Estimated market size and energy cost savings. Volume 2: Assumptions, methodology and results

    Science.gov (United States)

    Gates, W. R.

    1983-02-01

    Estimated future energy cost savings associated with the development of cost-competitive solar thermal technologies (STT) are discussed. Analysis is restricted to STT in electric applications for 16 high-insolation/high-energy-price states. Three fuel price scenarios and three 1990 STT system costs are considered, reflecting uncertainty over future fuel prices and STT cost projections. Solar thermal technology research and development (R&D) is found to be unacceptably risky for private industry in the absence of federal support. Energy cost savings were projected to range from $0 to $10 billion (1990 values in 1981 dollars), depending on the system cost and fuel price scenario. Normal R&D investment risks are accentuated because the Organization of Petroleum Exporting Countries (OPEC) cartel can artificially manipulate oil prices and undercut growth of alternative energy sources. Federal participation in STT R&D to help capture the potential benefits of developing cost-competitive STT was found to be in the national interest. Analysis is also provided regarding two federal incentives currently in use: The Federal Business Energy Tax Credit and direct R&D funding.

  11. Solar thermal technology development: Estimated market size and energy cost savings. Volume 2: Assumptions, methodology and results

    Science.gov (United States)

    Gates, W. R.

    1983-01-01

    Estimated future energy cost savings associated with the development of cost-competitive solar thermal technologies (STT) are discussed. Analysis is restricted to STT in electric applications for 16 high-insolation/high-energy-price states. Three fuel price scenarios and three 1990 STT system costs are considered, reflecting uncertainty over future fuel prices and STT cost projections. Solar thermal technology research and development (R&D) is found to be unacceptably risky for private industry in the absence of federal support. Energy cost savings were projected to range from $0 to $10 billion (1990 values in 1981 dollars), depending on the system cost and fuel price scenario. Normal R&D investment risks are accentuated because the Organization of Petroleum Exporting Countries (OPEC) cartel can artificially manipulate oil prices and undercut growth of alternative energy sources. Federal participation in STT R&D to help capture the potential benefits of developing cost-competitive STT was found to be in the national interest. Analysis is also provided regarding two federal incentives currently in use: The Federal Business Energy Tax Credit and direct R&D funding.

  12. The South Wilmington Area remedial cost estimating methodology (RCEM) -- A planning tool and reality check for brownfield development

    International Nuclear Information System (INIS)

    Yancheski, T.B.; Swanson, J.E.

    1996-01-01

    The South Wilmington Area (SWA), which comprises 200 acres of multi-use urban lowlands adjacent to the Christina River, is a brownfields area that has been targeted for redevelopment/restoration as part of a major waterfront revitalization project for the City of Wilmington, Delaware. The vision for this riverfront development, which is being promoted by a state-funded development corporation, includes plans for a new harbor, convention and entertainment facilities, upscale residences, an urban wildlife refuge, and the restoration of the Christina River. However, the environmental quality of the SWA has been seriously impacted by an assortment of historic and current heavy industrial land-uses since the late 1800s, and extensive environmental cleanup of this area will be required as part of any redevelopment plan. Given that the environmental cleanup cost will be a major factor in determining the overall economic feasibility of brownfield development in the SWA, a reliable means of estimating potential preliminary remedial costs, without the expense of costly investigative and engineering studies, was needed to assist with this redevelopment initiative. The primary chemicals-of-concern (COCs) area-wide are lead and petroleum compounds; however, there are hot-spot occurrences of polynuclear aromatic hydrocarbons (PAHs), PCBs, and other heavy metals such as arsenic and mercury

  13. Estimation of block conductivities from hydrologically calibrated fracture networks. Description of methodology and application to Romuvaara investigation area

    International Nuclear Information System (INIS)

    Niemi, A.; Kontio, K.; Kuusela-Lahtinen, A.; Vaittinen, T.

    1999-03-01

    This study looks at heterogeneity in hydraulic conductivity at the Romuvaara site. It concentrates on the average rock outside the deterministic fracture zones, especially in the deeper parts of the bedrock. A large number of stochastic fracture networks is generated based on fracture geometry data from the site. The hydraulic properties of the fractures are determined by calibrating the networks against well test data. The calibration is done by starting from an initial estimate for the fracture transmissivity distribution based on 2 m interval flow meter data, simulating the 10 m constant head injection test behaviour in a number of fracture network realisations and comparing the simulated well test statistics to the measured ones. A large number of possible combinations of mean and standard deviation of fracture transmissivities are tested and the goodness-of-fit between the measured and simulated results determined by means of the bootstrapping method. As a result, a range of acceptable fracture transmissivity distribution parameters is obtained. In the accepted range, the mean of log transmissivity varies between -13.9 and -15.3 and standard deviation between 4.0 and 3.2, with increase in standard deviation compensating for decrease in mean. The effect of spatial autocorrelation was not simulated. The variogram analysis did, however, give indications that an autocorrelation range of the order of 10 m might be realistic for the present data. Based on the calibrated fracture networks, equivalent continuum conductivities of the calibrated 30 m x 30 m x 30 m conductivity blocks were determined. For each realisation, three sets of simulations were carried out with the main gradient in x, y and z directions, respectively. Based on these results the components of the conductivity tensor were determined. Such data can be used e.g. for stochastic continuum type Monte Carlo simulations with larger scale models. The hydraulic conductivities in the direction of the

  14. Estimation of block conductivities from hydrologically calibrated fracture networks. Description of methodology and application to Romuvaara investigation area

    Energy Technology Data Exchange (ETDEWEB)

    Niemi, A [Royal Institute of Technology, Stockholm (Sweden); Kontio, K; Kuusela-Lahtinen, A; Vaittinen, T [VTT Communities and Infrastructure, Espoo (Finland)

    1999-03-01

    This study looks at heterogeneity in hydraulic conductivity at the Romuvaara site. It concentrates on the average rock outside the deterministic fracture zones, especially in the deeper parts of the bedrock. A large number of stochastic fracture networks is generated based on fracture geometry data from the site. The hydraulic properties of the fractures are determined by calibrating the networks against well test data. The calibration is done by starting from an initial estimate for the fracture transmissivity distribution based on 2 m interval flow meter data, simulating the 10 m constant head injection test behaviour in a number of fracture network realisations and comparing the simulated well test statistics to the measured ones. A large number of possible combinations of mean and standard deviation of fracture transmissivities are tested and the goodness-of-fit between the measured and simulated results determined by means of the bootstrapping method. As a result, a range of acceptable fracture transmissivity distribution parameters is obtained. In the accepted range, the mean of log transmissivity varies between -13.9 and -15.3 and standard deviation between 4.0 and 3.2, with increase in standard deviation compensating for decrease in mean. The effect of spatial autocorrelation was not simulated. The variogram analysis did, however, give indications that an autocorrelation range of the order of 10 m might be realistic for the present data. Based on the calibrated fracture networks, equivalent continuum conductivities of the calibrated 30 m x 30 m x 30 m conductivity blocks were determined. For each realisation, three sets of simulations were carried out with the main gradient in x, y and z directions, respectively. Based on these results the components of the conductivity tensor were determined. Such data can be used e.g. for stochastic continuum type Monte Carlo simulations with larger scale models. The hydraulic conductivities in the direction of the

  15. The fraction of NO in exhaled air and estimates of alveolar NO in adolescents with asthma: methodological aspects.

    Science.gov (United States)

    Heijkenskjöld-Rentzhog, Charlotte; Alving, Kjell; Kalm-Stephens, Pia; Lundberg, Jon O; Nordvall, Lennart; Malinovschi, Andrei

    2012-10-01

    This study investigated the oral contribution to exhaled NO in young people with asthma and its potential effects on estimated alveolar NO (CalvNO), a proposed marker of inflammation in peripheral airways. Secondary aims were to investigate the effects of various exhalation flow-rates and the feasibility of different proposed adjustments of CalvNO for trumpet model and axial diffusion (TMAD). Exhaled NO at flow rates of 50-300 ml/sec, and salivary nitrite, was measured before and after antibacterial mouthwash in 29 healthy young people (10-20 years) and 29 with asthma (10-19 years). CalvNO was calculated using the slope-intercept model with and without TMAD adjustment. Exhaled NO at 50 ml/sec decreased significantly after mouthwash, to a similar degree in asthmatic and healthy subjects (8.8% vs. 9.8%, P = 0.49). The two groups had similar salivary nitrite levels (56.4 vs. 78.4 µM, P = 0.25). CalvNO was not significantly decreased by mouthwash. CalvNO levels were similar when flow-rates between 50-200 or 100-300 ml/sec were used (P = 0.34 in asthmatics and P = 0.90 in healthy subjects). A positive association was found between bronchial and alveolar NO in asthmatic subjects and this disappeared after the TMAD adjustment. Negative TMAD-adjusted CalvNO values were found in a minority of the subjects. Young people with and without asthma have similar salivary nitrite levels and oral contributions to exhaled NO and therefore no antibacterial mouthwash is necessary in routine use. TMAD corrections of alveolar NO could be successfully applied in young people with asthma and yielded negative results only in a minority of subjects. Copyright © 2012 Wiley Periodicals, Inc.
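
    Not from the study: a minimal sketch of the slope-intercept (two-compartment) estimate mentioned above, in which NO output is regressed on exhalation flow so that the slope approximates the alveolar NO concentration and the intercept the bronchial NO flux. The flow and FeNO values are invented for illustration, and the TMAD adjustment applied in the paper is not reproduced.

        import numpy as np

        flows = np.array([0.05, 0.10, 0.20, 0.30])     # exhalation flows, L/s (50-300 ml/sec)
        feno = np.array([18.0, 11.0, 7.5, 6.2])        # exhaled NO fraction at each flow, ppb

        vno = feno * flows                             # NO output in nL/s (ppb = nL/L)
        calv_no, jaw_no = np.polyfit(flows, vno, 1)    # slope ~ alveolar NO (ppb), intercept ~ airway flux (nL/s)
        print(round(calv_no, 1), "ppb alveolar NO;", round(jaw_no, 2), "nL/s bronchial flux")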

  16. Methodology to estimate the threshold in-cylinder temperature for self-ignition of fuel during cold start of Diesel engines

    International Nuclear Information System (INIS)

    Broatch, A.; Ruiz, S.; Margot, X.; Gil, A.

    2010-01-01

    Cold startability of automotive direct injection (DI) Diesel engines is frequently one of the negative features when these are compared to their closest competitor, the gasoline engine. This situation worsens with the current design trends (engine downsizing) and the emerging new Diesel combustion concepts, such as HCCI, PCCI, etc., which require low compression ratio engines. To mitigate this difficulty, pre-heating systems (glow plugs, air heating, etc.) are frequently used and their technologies have been continuously developed. For the optimum design of these systems, the determination of the threshold temperature that the gas should have in the cylinder in order to provoke the self-ignition of the fuel injected during cold starting is crucial. In this paper, a novel methodology for estimating the threshold temperature is presented. In this methodology, experimental and computational procedures are adequately combined to get a good compromise between accuracy and effort. The measurements have been used as input data and boundary conditions in 3D and 0D calculations in order to obtain the thermodynamic conditions of the gas in the cylinder during cold starting. The results obtained from the study of two engine configurations (low and high compression ratio) indicate that the threshold in-cylinder temperature is a single temperature of about 415 °C.

  17. The ISO 50001 Impact Estimator Tool (IET 50001 V1.1.4) - User Guide and Introduction to the ISO 50001 Impacts Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Therkelsen, Peter L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rao, Prakash [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Aghajanzadeh, Arian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McKane, Aimee T. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-08-01

    ISO 50001-Energy management systems – Requirements with guidance for use, is an internationally developed standard that provides organizations with a flexible framework for implementing an energy management system (EnMS) with the goal of continual energy performance improvement. The ISO 50001 standard was first published in 2011 and has since seen growth in the number of certificates issued around the world, primarily in the industrial (agriculture, manufacturing, and mining) and service (commercial) sectors. Policy makers in many regions and countries are looking to or are already using ISO 50001 as a basis for energy efficiency, carbon reduction, and other energy performance improvement schemes. The Impact Estimator Tool 50001 (IET 50001 Tool) is a computational model developed to assist researchers and policy makers determine the potential impact of ISO 50001 implementation in the industrial and service (commercial) sectors for a given region or country. The IET 50001 Tool is based upon a methodology initially developed by the Lawrence Berkeley National Laboratory that has been improved upon and vetted by a group of international researchers. By using a commonly accepted and transparent methodology, users of the IET 50001 Tool can easily and clearly communicate the potential impact of ISO 50001 for a region or country.

  18. Comparison of the Bioavailability of Waste Laden Soils Using "In Vivo"/"In Vitro" Analytical Methodology and Bioaccessibility of Radionuclides for Refinement of Exposure/Dose Estimates; FINAL

    International Nuclear Information System (INIS)

    P. J. Lioy; M. Gallo; P. Georgopoulos; R. Tate; B. Buckley

    1999-01-01

    The bioavailability of soil contaminants can be measured using in vitro or in vivo techniques. Since there was no standard method for intercomparison among laboratories, we compared two techniques for bioavailability estimation: in vitro dissolution and an in vivo rat feeding model for a NIST-traceable soil material. Bioaccessibility was measured using a sequential soil extraction in synthetic analogues of human saliva, gastric and intestinal fluids. Bioavailability was measured in Sprague Dawley rats by determining metal levels in the major organs and urine, feces, and blood. Bioaccessibility was found to be a good indicator of relative metal bioavailability. Results are presented from bioaccessibility experiments with cesium in contaminated DOE soils, and total alpha and beta bioaccessibility. The results indicate that the modified methodology for bioaccessibility can be used for specific radionuclide analysis.

  19. Estimation of the National Disease Burden of Influenza-Associated Severe Acute Respiratory Illness in Kenya and Guatemala: A Novel Methodology

    Science.gov (United States)

    Katz, Mark A.; Lindblade, Kim A.; Njuguna, Henry; Arvelo, Wences; Khagayi, Sammy; Emukule, Gideon; Linares-Perez, Nivaldo; McCracken, John; Nokes, D. James; Ngama, Mwanajuma; Kazungu, Sidi; Mott, Joshua A.; Olsen, Sonja J.; Widdowson, Marc-Alain; Feikin, Daniel R.

    2013-01-01

    Background Knowing the national disease burden of severe influenza in low-income countries can inform policy decisions around influenza treatment and prevention. We present a novel methodology using locally generated data for estimating this burden. Methods and Findings This method begins with calculating the hospitalized severe acute respiratory illness (SARI) incidence for children Guatemala, using data from August 2009–July 2011. In Kenya (2009 population 38.6 million persons), the annual number of hospitalized influenza-associated SARI cases ranged from 17,129–27,659 for children Guatemala (2011 population 14.7 million persons), the annual number of hospitalized cases of influenza-associated pneumonia ranged from 1,065–2,259 (0.5–1.0 per 1,000 persons) among children Guatemala. This method can be performed in most low and lower-middle income countries. PMID:23573177
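
    Not part of the record: the extrapolation logic behind such national burden estimates, sketched with placeholder numbers (only the national population figure is taken from the abstract); the study's actual incidence rates, adjustments and age strata are not reproduced.

        sari_rate_per_1000 = 4.0      # hospitalized SARI incidence at the surveillance site, per 1,000 person-years (placeholder)
        flu_positive_frac = 0.12      # fraction of SARI cases testing positive for influenza (placeholder)
        utilization_adj = 1.3         # adjustment for cases not reaching sentinel hospitals (placeholder)
        population = 38_600_000       # Kenya 2009 population, as quoted in the record

        national_cases = sari_rate_per_1000 / 1000 * flu_positive_frac * utilization_adj * population
        print(f"~{national_cases:,.0f} hospitalized influenza-associated SARI cases per year")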

  20. Is sea-level rising?

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A.S.

    correction in the estimation of trends obtained for tide gauge records. The altimeter data permit the preparation of spatial maps of sea-level rise trends. We present a map prepared for the Indian Ocean (Figure 4) north of 10°S, which shows a fairly uniform... drawn information from research papers published by the author and the report of the IPCC AR5 WG1 Chapter 13: Sea Level Changes, in which the author served as a ‘Lead Author’. Figure 1 is prepared using data from the University of Colorado. Nerem, R...

  1. Winding up the molecular clock in the genus Carabus (Coleoptera: Carabidae: assessment of methodological decisions on rate and node age estimation

    Directory of Open Access Journals (Sweden)

    Andújar Carmelo

    2012-03-01

    Full Text Available Abstract Background Rates of molecular evolution are known to vary across taxa and among genes, and this requires rate calibration for each specific dataset based on external information. Calibration is sensitive to evolutionary model parameters, partitioning schemes and clock model. However, the way in which these and other analytical aspects affect both the rates and the resulting clade ages from calibrated phylogenies is not yet well understood. To investigate these aspects we have conducted calibration analyses for the genus Carabus (Coleoptera, Carabidae) on five mitochondrial and four nuclear DNA fragments with 7888 nt total length, testing different clock models and partitioning schemes to select the most suitable using Bayes Factors comparisons. Results We used these data to investigate the effect of ambiguous character and outgroup inclusion on both the rates of molecular evolution and the TMRCA of Carabus. We found considerable variation in rates of molecular evolution depending on the fragment studied (ranging from 5.02% in cob to 0.26% divergence/My in LSU-A), but also on analytical conditions. Alternative choices of clock model, partitioning scheme, treatment of ambiguous characters, and outgroup inclusion resulted in rate increments ranging from 28% (HUWE1) to 1000% (LSU-B and ITS2) and increments in the TMRCA of Carabus ranging from 8.4% (cox1-A) to 540% (ITS2). Results support an origin of the genus Carabus during the Oligocene in the Eurasian continent followed by a Miocene differentiation that originated all main extant lineages. Conclusions The combination of several genes is proposed as the best strategy to minimise both the idiosyncratic behaviors of individual markers and the effect of analytical aspects in rate and age estimations. Our results highlight the importance of estimating rates of molecular evolution for each specific dataset, selecting for optimal clock and partitioning models as well as other methodological issues

  2. Experimentation and Prediction of Temperature Rise in Turning ...

    African Journals Online (AJOL)

    Experimentation and Prediction of Temperature Rise in Turning Process using Response Surface Methodology. Science, Technology and Arts Research Journal.

  3. The Rise of Iran

    DEFF Research Database (Denmark)

    Rahigh-Aghsan, Ali

    Iran is viewed as a rising power that poses an increasing threat to regional and even global security. This view is wrong for three reasons. Iran's hard and soft power is exaggerated by most accounts; it is too limited to allow the Iranians to dominate the Persian Gulf let alone the Middle East...

  4. The Rise of Iran

    DEFF Research Database (Denmark)

    Rahigh-Aghsan, Ali; Jakobsen, Peter Viggo

    2010-01-01

    Iran is viewed as a rising power that poses an increasing threat to regional and even global security. This view is wrong for three reasons. Iran's hard and soft power is exaggerated by most accounts; it is too limited to allow the Iranians to dominate the Persian Gulf let alone the Middle East...

  5. The Norwegian Emission Inventory 2010. Documentation of methodologies for estimating emissions of greenhouse gases and long-range transboundary air pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Sandmo, Trond (ed.)

    2010-06-15

    The Norwegian emission inventory is a joint undertaking between the Climate and Pollution Agency (Klif) and Statistics Norway. Statistics Norway is responsible for the collection and development of activity data, and emission figures are derived from models operated by Statistics Norway. The Climate and Pollution Agency is responsible for the emission factors, for providing data from specific industries and sources and for considering the quality, and assuring necessary updating, of emission models like e.g. the road traffic model and calculation of methane emissions from landfills. Emission data are used for a range of national applications and for international reporting. The Climate and Pollution Agency is responsible for the Norwegian reporting to the United Nations Framework Convention on Climate Change (UNFCCC) and to the United Nations Economic Commission for Europe (UN-ECE). This report documents the methodologies used in the Norwegian emission inventory of greenhouse gases (GHG), acidifying pollutants, heavy metals (HM) and persistent organic pollutants (POPs). The documentation will also serve as a part of the National Inventory Report submitted by Norway to the United Nations Framework Convention on Climate Change (UNFCCC), and as documentation of the reported emissions to UNECE for the pollutants restricted by CLRTAP (Convention on Long-Range Transboundary Air Pollution). LULUCF is not considered in this report; see the National Inventory Report (Climate and Pollution Agency 2010) for documentation on this topic. This report replaces the previous documentation of the emission model (Sandmo 2009), and is the latest annually updated version of a report edited by Britta Hoem in 2005. The most important changes since last year's documentation are: emissions of CH4 and N2O from well testing of crude oil offshore have been included (these have previously not been estimated); emissions of CH4 from enteric fermentation have increased for the whole

  6. A methodological framework for assessing agreement between cost-effectiveness outcomes estimated using alternative sources of data on treatment costs and effects for trial-based economic evaluations.

    Science.gov (United States)

    Achana, Felix; Petrou, Stavros; Khan, Kamran; Gaye, Amadou; Modi, Neena

    2018-01-01

    A new methodological framework for assessing agreement between cost-effectiveness endpoints generated using alternative sources of data on treatment costs and effects for trial-based economic evaluations is proposed. The framework can be used to validate cost-effectiveness endpoints generated from routine data sources when comparable data is available directly from trial case report forms or from another source. We illustrate application of the framework using data from a recent trial-based economic evaluation of the probiotic Bifidobacterium breve strain BBG administered to babies less than 31 weeks of gestation. Cost-effectiveness endpoints are compared using two sources of information; trial case report forms and data extracted from the National Neonatal Research Database (NNRD), a clinical database created through collaborative efforts of UK neonatal services. Focusing on mean incremental net benefits at £30,000 per episode of sepsis averted, the study revealed no evidence of discrepancy between the data sources (two-sided p values >0.4), low probability estimates of miscoverage (ranging from 0.039 to 0.060) and concordance correlation coefficients greater than 0.86. We conclude that the NNRD could potentially serve as a reliable source of data for future trial-based economic evaluations of neonatal interventions. We also discuss the potential implications of increasing opportunity to utilize routinely available data for the conduct of trial-based economic evaluations.
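
    Not part of the record: the abstract reports concordance correlation coefficients, so a minimal sketch of Lin's concordance correlation coefficient applied to paired per-patient estimates is given here; the numbers are illustrative, not trial data.

        import numpy as np

        def lins_ccc(x, y):
            # Lin's concordance correlation coefficient between two sets of estimates
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxy = np.mean((x - x.mean()) * (y - y.mean()))
            return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        crf = [100.0, 250.0, 180.0, 90.0, 310.0]    # e.g. net benefits from case report forms (illustrative)
        db = [110.0, 240.0, 175.0, 95.0, 300.0]     # the same endpoints from a routine database (illustrative)
        print(round(lins_ccc(crf, db), 3))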

  7. The Rise of Iran

    DEFF Research Database (Denmark)

    Jakobsen, Peter Viggo; Rahigh-Aghsan, Ali

    2010-01-01

    Iran is viewed by many as a rising power that poses an increasing threat to regional and even global security. This view is wrong for three reasons. Iran's hard and soft power is exaggerated by most accounts; it is too limited to allow the Iranians to dominate the Persian Gulf let alone the Middle East, and its brand of Shi‘ism has very limited appeal outside of Iran. Second, growing internal political and economic instability will seriously limit Iran's bid for regional dominance. Third, the failure to stop the Iranian nuclear program has led analysts to underestimate the ability of the other regional powers and the West to balance Iran and contain its influence, even if it acquires nuclear weapons. If these limitations on Iranian power are taken into account the rise seems destined to be a short one.

  8. Contemporary sea level rise.

    Science.gov (United States)

    Cazenave, Anny; Llovel, William

    2010-01-01

    Measuring sea level change and understanding its causes has considerably improved in the recent years, essentially because new in situ and remote sensing observations have become available. Here we report on most recent results on contemporary sea level rise. We first present sea level observations from tide gauges over the twentieth century and from satellite altimetry since the early 1990s. We next discuss the most recent progress made in quantifying the processes causing sea level change on timescales ranging from years to decades, i.e., thermal expansion of the oceans, land ice mass loss, and land water-storage change. We show that for the 1993-2007 time span, the sum of climate-related contributions (2.85 ± 0.35 mm/yr) is only slightly less than altimetry-based sea level rise (3.3 ± 0.4 mm/yr): approximately 30% of the observed rate of rise is due to ocean thermal expansion and approximately 55% results from land ice melt. Recent acceleration in glacier melting and ice mass loss from the ice sheets increases the latter contribution up to 80% for the past five years. We also review the main causes of regional variability in sea level trends: The dominant contribution results from nonuniform changes in ocean thermal expansion.

  9. Large Volcanic Rises on Venus

    Science.gov (United States)

    Smrekar, Suzanne E.; Kiefer, Walter S.; Stofan, Ellen R.

    1997-01-01

    Large volcanic rises on Venus have been interpreted as hotspots, or the surface manifestation of mantle upwelling, on the basis of their broad topographic rises, abundant volcanism, and large positive gravity anomalies. Hotspots offer an important opportunity to study the behavior of the lithosphere in response to mantle forces. In addition to the four previously known hotspots, Atla, Bell, Beta, and western Eistla Regiones, five new probable hotspots, Dione, central Eistla, eastern Eistla, Imdr, and Themis, have been identified in the Magellan radar, gravity and topography data. These nine regions exhibit a wider range of volcano-tectonic characteristics than previously recognized for venusian hotspots, and have been classified as rift-dominated (Atla, Beta), coronae-dominated (central and eastern Eistla, Themis), or volcano-dominated (Bell, Dione, western Eistla, Imdr). The apparent depths of compensation for these regions range from 65 to 260 km. New estimates of the elastic thickness, using the degree and order 90 spherical harmonic field, are 15-40 km at Bell Regio, and 25 km at western Eistla Regio. Phillips et al. find a value of 30 km at Atla Regio. Numerous models of lithospheric and mantle behavior have been proposed to interpret the gravity and topography signature of the hotspots, with most studies focusing on Atla or Beta Regiones. Convective models with Earth-like parameters result in estimates of the thickness of the thermal lithosphere of approximately 100 km. Models of stagnant lid convection or thermal thinning infer the thickness of the thermal lithosphere to be 300 km or more. Without additional constraints, any of the model fits are equally valid. The thinner thermal lithosphere estimates are most consistent with the volcanic and tectonic characteristics of the hotspots. Estimates of the thermal gradient based on estimates of the elastic thickness also support a relatively thin lithosphere (Phillips et al.). The advantage of larger estimates of

  10. High School Students' Accuracy in Estimating the Cost of College: A Proposed Methodological Approach and Differences among Racial/Ethnic Groups and College Financial-Related Factors

    Science.gov (United States)

    Nienhusser, H. Kenny; Oshio, Toko

    2017-01-01

    High school students' accuracy in estimating the cost of college (AECC) was examined by utilizing a new methodological approach, the absolute-deviation-continuous construct. This study used the High School Longitudinal Study of 2009 (HSLS:09) data and examined 10,530 11th grade students in order to measure their AECC for 4-year public and private…

  11. A Simple and Improved HPLC-PDA Method for Simultaneous Estimation of Fexofenadine and Pseudoephedrine in Extended Release Tablets by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Ruhul Kayesh

    2017-01-01

    Full Text Available A simple RP-HPLC method has been developed for simultaneous estimation of fexofenadine and pseudoephedrine in their extended release tablet. The method was developed based on statistical design of experiments (DoE) and Response Surface Methodology. Separation was achieved on a double end-capped C18 column (250 mm × 4 mm, 5 μm). In this experiment, two components of the mobile phase, namely, acetonitrile (% v/v) and methanol (% v/v), were the factors whereas retention and resolution of the chromatographic peaks were the responses. The effects of different compositions of the factors on the corresponding responses were investigated. The optimum chromatographic condition for the current case was found as an isocratic mobile phase consisting of 20 mM phosphate buffer (pH 6.8) and acetonitrile and methanol in a ratio of 50 : 36 : 14 (% v/v) at a flow rate of 1 mL/min for 7 minutes. The retention of pseudoephedrine and fexofenadine was found to be 2.6 min and 4.7 min, respectively. The method was validated according to the ICH and FDA guidelines and various validation parameters were determined. Also, forced degradation studies in acid, base, oxidation, and reduction media and in thermal condition were performed to establish specificity and the stability-indicating property of this method. Practical applicability of this method was checked in extended release tablets available in the Bangladeshi market.
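
    Not part of the record: a minimal sketch of the response-surface step, fitting a second-order model of one response (e.g., resolution) to the two mobile-phase factors; the design points and responses are invented for illustration, not the study's data.

        import numpy as np

        x1 = np.array([30.0, 36.0, 42.0, 30.0, 42.0, 36.0, 36.0])   # acetonitrile, % v/v
        x2 = np.array([10.0, 14.0, 18.0, 18.0, 10.0, 14.0, 14.0])   # methanol, % v/v
        y = np.array([1.8, 2.6, 2.1, 1.5, 2.0, 2.5, 2.6])           # peak resolution (illustrative)

        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)                # b0, b1, b2, b12, b11, b22

        def predicted_resolution(acn, meoh):
            return float(coef @ np.array([1.0, acn, meoh, acn * meoh, acn ** 2, meoh ** 2]))

        print(round(predicted_resolution(36.0, 14.0), 2))           # at the composition reported as optimal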

  12. The Norwegian Emission Inventory 2012. Documentation of methodologies for estimating emissions of greenhouse gases and long-range transboundary air pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Sandmo, Trond (ed.)

    2012-07-01

    The Norwegian emission inventory is a joint undertaking between the Climate and Pollution Agency and Statistics Norway. Statistics Norway is responsible for the collection and development of activity data, and emission figures are derived from models operated by Statistics Norway. The Climate and Pollution Agency is responsible for the emission factors, for providing data from specific industries and sources and for considering the quality, and assuring necessary updating, of emission models like, e.g., the road traffic model and calculation of methane emissions from landfills. Emission data are used for a range of national applications and for international reporting. The Climate and Pollution Agency is responsible for the Norwegian reporting to the United Nations Framework Convention on Climate Change (UNFCCC) and to the United Nations Economic Commission for Europe (UN-ECE). This report documents the methodologies used in the Norwegian emission inventory of greenhouse gases (GHG), acidifying pollutants, heavy metals (HM) and persistent organic pollutants (POPs). The documentation will also serve as a part of the National Inventory Report submitted by Norway to the United Nations Framework Convention on Climate Change (UNFCCC), and as documentation of the reported emissions to UNECE for the pollutants restricted by CLRTAP (Convention on Long-Range Transboundary Air Pollution). LULUCF (land use, land-use change and forestry) is not considered in this report; see the National Inventory Report (Climate and Pollution Agency 2012) for documentation on this topic. This report replaces the previous documentation of the emission model (Sandmo 2011), and is the latest annually updated version of a report edited by Britta Hoem in 2005. The most important changes since last year's documentation are: minor NOx emissions from the production of rock wool, which have previously not been estimated, have been included; some factors for the estimation of N2O from agriculture have been altered

  13. The Norwegian Emission Inventory 2012. Documentation of methodologies for estimating emissions of greenhouse gases and long-range transboundary air pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Sandmo, Trond [ed.

    2012-07-01

    The Norwegian emission inventory is a joint undertaking between the Climate and Pollution Agency and Statistics Norway. Statistics Norway is responsible for the collection and development of activity data, and emission figures are derived from models operated by Statistics Norway. The Climate and Pollution Agency is responsible for the emission factors, for providing data from specific industries and sources and for considering the quality, and assuring necessary updating, of emission models like, e.g., the road traffic model and calculation of methane emissions from landfills. Emission data are used for a range of national applications and for international reporting. The Climate and Pollution Agency is responsible for the Norwegian reporting to the United Nations Framework Convention on Climate Change (UNFCCC) and to the United Nations Economic Commission for Europe (UN-ECE). This report documents the methodologies used in the Norwegian emission inventory of greenhouse gases (GHG), acidifying pollutants, heavy metals (HM) and persistent organic pollutants (POPs). The documentation will also serve as a part of the National Inventory Report submitted by Norway to the United Nations Framework Convention on Climate Change (UNFCCC), and as documentation of the reported emissions to UNECE for the pollutants restricted by CLRTAP (Convention on Long-Range Transboundary Air Pollution). LULUCF (land use, land-use change and forestry) is not considered in this report; see the National Inventory Report (Climate and Pollution Agency 2012) for documentation on this topic. This report replaces the previous documentation of the emission model (Sandmo 2011), and is the latest annually updated version of a report edited by Britta Hoem in 2005. The most important changes since last year's documentation are: minor NOx emissions from the production of rock wool, which have previously not been estimated, have been included; some factors for the estimation of N2O from agriculture have been altered; The

  14. Sensitivity analysis of hydrogeological parameters affecting groundwater storage change caused by sea level rise

    Science.gov (United States)

    Shin, J.; Kim, K.-H.; Lee, K.-K.

    2012-04-01

    Sea level rise, which is one of the representative phenomena of climate changes caused by global warming, can affect the groundwater system. The rising trend of the sea level caused by global warming is reported to be about 3 mm/year for the most recent 10 year average (IPCC, 2007). The rate of sea level rise around the Korean peninsula is reported to be 2.30±2.22 mm/yr during the 1960-1999 period (Cho, 2002) and 2.16±1.77 mm/yr (Kim et al., 2009) during the 1968-2007 period. Both of these rates are faster than the 1.8±0.5 mm/yr global average for the similar 1961-2003 period (IPCC, 2007). In this study, we analyzed changes in the groundwater environment affected by sea level rise by using an analytical methodology. We tried to identify the parameters that most strongly affect the change in fresh water storage in coastal groundwater. A hypothetical island model of cylindrical shape is considered. Depending on the natural and hydrogeological conditions, the groundwater storage change can go in either direction as the sea level rises. Analysis of the computation results shows that topographic slope and hydraulic conductivity are the most sensitive factors. The contributions of the groundwater recharge rate and the thickness of the aquifer below sea level are relatively less effective. On islands with steep seashore slopes larger than 1~2 degrees or so, the storage amount of fresh water in a coastal area increases as sea level rises. On the other hand, when sea level drops, the storage amount decreases. This is because the groundwater level also rises with the rising sea level in steep seashores. For relatively flat seashores, where the slope is smaller than around 1-2 degrees, the storage amount of coastal fresh water decreases when the sea level rises because the area flooded by the rising sea water is increased. The volume of aquifer fresh water in this circumstance is greatly reduced in proportion to the flooded area with the sea

  15. Coal prices rise

    International Nuclear Information System (INIS)

    McLean, A.

    2001-01-01

    Coking and semi hard coking coal price agreements had been reached, but, strangely enough, the reaching of common ground on semi soft coking coal, ultra low volatile coal and thermal coal seemed some way off. More of this phenomenon later, but suffice to say that, traditionally, the semi soft and thermal coal prices have fallen into place as soon as the hard, or prime, coking coal prices have been determined. The rise and rise of the popularity of the ultra low volatile coals has seen demand for this type of coal grow almost exponentially. Perhaps one of the most interesting facets of the coking coal settlements announced to date is that the deals appear almost to have been preordained. The extraordinary thing is that the preordination has been at the prescience of the sellers. Traditionally, coking coal price fixing has been the prerogative of the Japanese Steel Mills (JSM) cartel (Nippon, NKK, Kawasaki, Kobe and Sumitomo) who presented a united front to a somewhat disorganised force of predominantly Australian and Canadian sellers. However, by the time JFY 2001 had come round, the rules of the game had changed

  16. The rise of Chrome

    Directory of Open Access Journals (Sweden)

    Jonathan Tamary

    2015-10-01

    Full Text Available Since Chrome’s initial release in 2008 it has grown in market share, and now controls roughly half of the desktop browsers market. In contrast with Internet Explorer, the previous dominant browser, this was not achieved by marketing practices such as bundling the browser with a pre-loaded operating system. This raises the question of how Chrome achieved this remarkable feat, while other browsers such as Firefox and Opera were left behind. We show that both the performance of Chrome and its conformance with relevant standards are typically better than those of the two main contending browsers, Internet Explorer and Firefox. In addition, based on a survey of the importance of 25 major features, Chrome product managers seem to have made somewhat better decisions in selecting where to put effort. Thus the rise of Chrome is consistent with technical superiority over the competition.

  17. Plume rise predictions

    International Nuclear Information System (INIS)

    Briggs, G.A.

    1976-01-01

    Anyone involved with diffusion calculations becomes well aware of the strong dependence of maximum ground concentrations on the effective stack height, h_e. For most conditions chi_max is approximately proportional to h_e^(-2), as has been recognized at least since 1936 (Bosanquet and Pearson). Making allowance for the gradual decrease in the ratio of vertical to lateral diffusion at increasing heights, the exponent is slightly larger, say chi_max ∝ h_e^(-2.3). In inversion breakup fumigation, the exponent is somewhat smaller; very crudely, chi_max ∝ h_e^(-1.5). In any case, for an elevated emission the dependence of chi_max on h_e is substantial. It is postulated that a really clever ignorant theoretician can disguise his ignorance with dimensionless constants. For most sources the effective stack height is considerably larger than the actual source height, h_s. For instance, for power plants with no downwash problems, h_e is more than twice h_s whenever the wind is less than 10 m/sec, which is most of the time. This is unfortunate for anyone who has to predict ground concentrations, for he is likely to have to calculate the plume rise, Δh. Using h_e = h_s + Δh instead of h_s alone may reduce chi_max by a factor of anywhere from 4 to infinity. Factors to be considered in making plume rise predictions are discussed.
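
    Not part of the record: a small worked example of the sensitivity the abstract describes, using the approximate chi_max ∝ h_e^(-2.3) scaling; the stack height and plume rise values are illustrative.

        h_s = 100.0                       # actual stack height, m (illustrative)
        dh = 150.0                        # plume rise, m (illustrative)
        h_e = h_s + dh                    # effective stack height

        ratio = (h_e / h_s) ** -2.3       # chi_max with plume rise / chi_max without
        print(f"chi_max falls to {ratio:.0%} of its no-rise value (a factor of {1 / ratio:.1f})")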

  18. The 1988 coal outlook: steadily rising consumption

    Energy Technology Data Exchange (ETDEWEB)

    Soras, C.G.; Stodden, J.R.

    1987-12-01

    Total coal use - domestic and foreign - will reach 910 million tons in 1988, an expansion of 1.3% from an estimated 898 million tons in 1987. The overall rise in consumption will add to inventory needs. Moreover, lower interest rates cut effective carrying costs and further encourage the holding of coal stocks by users. The result will be a gain in inventories of 3.5 million tons by the end of 1988. As a result of all these factors, coal production is anticipated to rise by 11.6 million tons, or 1.2%, which projects firm markets in a time of relatively soft economic conditions in the USA. 2 tabs.

  19. Estimation of the Joint Patient Condition Occurrence Frequencies from Operation Iraqi Freedom and Operation Enduring Freedom. Volume I: Development of Methodology

    Science.gov (United States)

    2011-03-28

    Supplemented BM: To more accurately describe combat trauma, a slight modification was made to the BM... Remaining text consists of fragments of report tables mapping DMMPO ICD-9 codes (e.g., 860.1 Traumatic Pneumothorax with Open Wound into Thorax) to a trauma category and an anatomical location (internal organ, chest).

  20. Methow River Studies, Washington: abundance estimates from Beaver Creek and the Chewuch River screw trap, methodology testing in the Whitefish Island side channel, and survival and detection estimates from hatchery fish releases, 2013

    Science.gov (United States)

    Martens, Kyle D.; Fish, Teresa M.; Watson, Grace A.; Connolly, Patrick J.

    2014-01-01

    , leaving one large pool near the bottom of the side channel and several shallow isolated pools that may or may not go dry. In seasonally connected side channels, juvenile salmonid survival in pools less than 100 cm average depth was lower than in pools greater than 100 cm average depth (Martens and Connolly, 2014). In this report, we document our field work and analysis completed in 2013. During 2013, USGS sampling efforts were focused on resampling of three reaches in Beaver Creek, testing methodology in the Whitefish Island side channel, conducting hatchery survival estimates, and operating a screw trap on the Chewuch River (funded by Yakama Nation; fig. 1). The Beaver Creek sampling effort was a revisit of three index sites sampled continuously from 2004 to 2007 to look at the fish response to barrier removal. Methodology testing in Whitefish Island side channel was done to determine the best method for evaluating fish populations after restoration efforts in side channels (previous sampling methods were determined to be ineffective after pools were deepened). Hatchery survival estimates were completed to monitor fish survival in the Methow and Columbia Rivers, while the screw trap was operated to estimate migrating fish populations in the Chewuch River and track passive integrated transponder (PIT)-tagged fish. In addition, we maintained a network of PIT-tag interrogation systems (PTIS), assisted Reclamation with fish removal events associated with stream restoration (two people for 9 days; 14 percent of summer field season), and conducted a stream metabolism study designed to help parameterize and calibrate the stream productivity model (Bellmore and others, 2014) with model validation.

  1. Proposed methodology for estimating the absorbed dose at the maze entrance in HDR brachytherapy facilities with Ir-192; Propuesta de metodologia para estimar la dosis absorbida en la entrada del laberinto en instalaciones de braquiterapia HDR con Ir-192

    Energy Technology Data Exchange (ETDEWEB)

    Pujades-Clamarchirant, M. C.; Perez-Calatayud, J.; Ballester, F.; Gimeno, J.; Granero, D.; Camacho, C.; Lliso, F.; Carmona, V.; Vijande, J.

    2011-07-01

    In the absence of procedures for assessing the design of a brachytherapy (BT) room with a maze, the formalism for external irradiation is usually adopted with different variations. The purpose of this study is to adapt the methodology of NCRP 151 [1] to estimate the absorbed dose at the entrance of a BT room and to compare it with the corresponding dosimetry data obtained with Monte Carlo (MC) in a previous work.

  2. Sea level rise and the geoid: factor analysis approach

    Directory of Open Access Journals (Sweden)

    Alexey Sadovski

    2013-08-01

    Full Text Available Sea levels are rising around the world, and this is a particular concern along most of the coasts of the United States. A 1989 EPA report shows that sea levels rose 5-6 inches more than the global average along the Mid-Atlantic and Gulf Coasts in the last century. The main reason for this is coastal land subsidence. This sea level rise is therefore better characterised as relative sea level rise than as global sea level rise. Thus, instead of studying sea level rise globally, this paper describes a statistical approach by using factor analysis of regional sea level rates of change. Unlike physical models and semi-empirical models that attempt to estimate how much and how fast sea levels are changing, this methodology allows for a discussion of the factor(s) that statistically affect sea level rates of change, and seeks patterns to explain spatial correlations.
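
    Not part of the record: a minimal sketch of the factor-analysis idea on synthetic monthly sea-level anomalies at a few gauges; real tide-gauge records and the study's choice of factors are not reproduced.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        months = np.arange(240)                                            # 20 years of monthly values
        common = 0.003 * months + 0.05 * np.sin(2 * np.pi * months / 12)   # shared rise plus seasonal cycle
        gauges = np.column_stack([a * common + 0.02 * rng.standard_normal(months.size)
                                  for a in (0.8, 1.0, 1.2, 1.5)])          # four gauges with different loadings

        fa = FactorAnalysis(n_components=1).fit(gauges)
        print(fa.components_.round(2))    # loadings: how strongly each gauge follows the common factor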

  3. Loss-of-Use Damages From U.S. Nuclear Testing in the Marshall Islands: Technical Analysis of the Nuclear Claims Tribunal’s Methodology and Alternative Estimates

    Science.gov (United States)

    2005-08-12

    productivity of the islands in producing copra or fish, was not considered. The assumption is also inconsistent with the capitalization model that the value of...David Barker and Jay Wa-Aadu, “Is Real Estate Becoming Important Again? A Neo Ricardian Model of Land Rent.” Real Estate Economics, Spring, 2004, pp...the model explicit, it avoids shortcomings of the NCT methodology, by using available data from RMI’s national income and product accounts that is

  4. Estimating the ROI on Implementation of RFID at the Ammunition Storage Warehouse and the 40th Supply Depot: KVA as a Methodology

    Science.gov (United States)

    2009-12-01

    Balanced Scorecard CAPM Capital Asset Pricing Model DIS Defense Information System DoD Department of...Measurement Tool (PMT) is the Balanced Scorecard (BSC) based on critical success factors and key performance indicators. The MND has referred to Jung’s...authors can replicate the methodology for multiple projects to generate a portfolio of projects. Similar to the Capital Asset Pricing Model ( CAPM ) or

  5. On Capillary Rise and Nucleation

    Science.gov (United States)

    Prasad, R.

    2008-01-01

    A comparison of capillary rise and nucleation is presented. It is shown that both phenomena result from a balance between two competing energy factors: a volume energy and a surface energy. Such a comparison may help to introduce nucleation with a topic familiar to the students, capillary rise. (Contains 1 table and 3 figures.)
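
    The balance the abstract refers to can be made explicit with two standard textbook results (notation assumed here, not quoted from the paper): Jurin's law for capillary rise, obtained by balancing the surface (wetting) energy against the gravitational volume energy, and the classical nucleation barrier, obtained by balancing the surface energy cost of a nucleus against its volume free-energy gain.

      % Capillary rise (Jurin's law): surface tension gamma, contact angle theta,
      % liquid density rho, tube radius r
      h = \frac{2\gamma\cos\theta}{\rho g r}

      % Homogeneous nucleation: volume term vs. surface term for a nucleus of radius r,
      % with Delta g_v the bulk free-energy gain per unit volume
      \Delta G(r) = -\tfrac{4}{3}\pi r^{3}\,\Delta g_v + 4\pi r^{2}\gamma,
      \qquad r^{*} = \frac{2\gamma}{\Delta g_v}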

  6. Plume rise from multiple sources

    International Nuclear Information System (INIS)

    Briggs, G.A.

    1975-01-01

    A simple enhancement factor for plume rise from multiple sources is proposed and tested against plume-rise observations. For bent-over buoyant plumes, this results in the recommendation that the multiple-source rise be calculated as [(N + S)/(1 + S)]^(1/3) times the single-source rise, Δh_1, where N is the number of sources and S = 6 (total width of source configuration / (N^(1/3) Δh_1))^(3/2). For calm conditions a crude but simple method is suggested for predicting the height of plume merger and subsequent behavior which is based on the geometry and velocity variations of a single buoyant plume. Finally, it is suggested that large clusters of buoyant sources might occasionally give rise to concentrated vortices either within the source configuration or just downwind of it
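
    The enhancement factor quoted above translates directly into a few lines of code; the numerical example values below are illustrative assumptions, not observations from the paper.

      # Multiple-source rise for bent-over buoyant plumes, following the factor in the abstract.
      def multiple_source_rise(dh1, n_sources, total_width):
          # dh1: single-source rise (m); total_width: width of the source configuration (m)
          s = 6.0 * (total_width / (n_sources ** (1.0 / 3.0) * dh1)) ** 1.5
          enhancement = ((n_sources + s) / (1.0 + s)) ** (1.0 / 3.0)
          return enhancement * dh1

      # Example: four closely spaced stacks, 100 m total configuration width, 200 m single-source rise.
      print(multiple_source_rise(dh1=200.0, n_sources=4, total_width=100.0))  # metres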

  7. A Novel Methodology to Estimate Single-Tree Biophysical Parameters from 3D Digital Imagery Compared to Aerial Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    Rocío Hernández-Clemente

    2014-11-01

    Full Text Available Airborne laser scanner (ALS) data provide an enhanced capability to remotely map two key variables in forestry: leaf area index (LAI) and tree height (H). Nevertheless, the cost, complexity and accessibility of this technology are not yet suited for meeting the broad demands required for estimating and frequently updating forest data. Here we demonstrate the capability of alternative solutions based on the use of low-cost color infrared (CIR) cameras to estimate tree-level parameters, providing a cost-effective solution for forest inventories. ALS data were acquired with a Leica ALS60 laser scanner and digital aerial imagery (DAI) was acquired with a consumer-grade camera modified for color infrared detection and synchronized with a GPS unit. In this paper we evaluate the generation of a DAI-based canopy height model (CHM) from imagery obtained with low-cost CIR cameras using structure from motion (SfM) and spatial interpolation methods in the context of a complex canopy, as in forestry. Metrics were calculated from the DAI-based CHM and the DAI-based Normalized Difference Vegetation Index (NDVI) for the estimation of tree height and LAI, respectively. Results were compared with the models estimated from ALS point cloud metrics. Field measurements of tree height and effective leaf area index (LAIe) were acquired from a total of 200 and 26 trees, respectively. Comparable accuracies were obtained in the tree height and LAI estimations using ALS and DAI data independently. Tree height estimated from DAI-based metrics (Percentile 90 (P90) and minimum height (MinH)) yielded a coefficient of determination (R2) of 0.71 and a root mean square error (RMSE) of 0.71 m, while models derived from ALS-based metrics (P90) yielded an R2 of 0.80 and an RMSE of 0.55 m. The estimation of LAI from DAI-based NDVI using Percentile 99 (P99) yielded an R2 of 0.62 and an RMSE of 0.17 m2/m2. A comparative analysis of LAI estimation using ALS-based metrics (laser penetration index
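
    As a schematic of the metric-to-field-measurement step, the sketch below fits a linear model of field tree height against two CHM metrics (P90 and minimum height) and reports R2 and RMSE; the arrays are synthetic placeholders, not the study's data.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score, mean_squared_error

      rng = np.random.default_rng(1)
      p90 = rng.uniform(5, 25, 200)                                   # CHM 90th percentile per tree crown (m)
      min_h = rng.uniform(0, 2, 200)                                  # minimum height metric (m)
      field_h = 1.05 * p90 - 0.5 * min_h + rng.normal(0, 0.7, 200)    # field-measured tree height (m)

      X = np.column_stack([p90, min_h])
      model = LinearRegression().fit(X, field_h)
      pred = model.predict(X)
      print("R2   =", round(r2_score(field_h, pred), 2))
      print("RMSE =", round(mean_squared_error(field_h, pred) ** 0.5, 2), "m")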

  8. Rise of a cold plume

    International Nuclear Information System (INIS)

    Kakuta, Michio

    1977-06-01

    The rise of smoke from the stacks of two research reactors in normal operation was measured by a photogrammetric method. The temperature of the effluent gas is less than 20 °C higher than that of the ambient air (heat emission of the order of 10^4 cal s^-1), and the efflux velocity divided by the wind speed is between 0.5 and 2.8 in all 16 smoke runs. The field data obtained within a downwind distance of 150 m are compared with those given by the plume rise formulas presently available. Considering the shape of the bending-over plume, Briggs' formula for a 'jet' gives a reasonable explanation of the observed plume rise. (auth.)

  9. Methodology for modelling plug-in electric vehicles in the power system and cost estimates for a system with either smart or dumb electric vehicles

    DEFF Research Database (Denmark)

    Kiviluoma, Juha; Meibom, Peter

    2011-01-01

    The article estimates the costs of plug-in electric vehicles (EVs) in a future power system as well as the benefits from smart charging and discharging EVs (smart EVs). To arrive at a good estimate, a generation planning model was used to create power plant portfolios, which were then operated in a more detailed unit commitment and dispatch model. In both models the charging and discharging of EVs is optimised together with the rest of the power system. Neither the system cost nor the market price of electricity for EVs turned out to be high (36–263 €/vehicle/year in the analysed scenarios). Most ...

  10. Regional Shelter Analysis Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dennison, Deborah [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Jave [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walker, Hoyt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Miller, Paul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-01

    The fallout from a nuclear explosion has the potential to injure or kill 100,000 or more people through exposure to external gamma (fallout) radiation. Existing buildings can reduce radiation exposure by placing material between fallout particles and exposed people. Lawrence Livermore National Laboratory was tasked with developing an operationally feasible methodology that could improve fallout casualty estimates. The methodology, called a Regional Shelter Analysis, combines the fallout protection that existing buildings provide civilian populations with the distribution of people in various locations. The Regional Shelter Analysis method allows the consideration of (a) multiple building types and locations within buildings, (b) country specific estimates, (c) population posture (e.g., unwarned vs. minimally warned), and (d) the time of day (e.g., night vs. day). The protection estimates can be combined with fallout predictions (or measurements) to (a) provide a more accurate assessment of exposure and injury and (b) evaluate the effectiveness of various casualty mitigation strategies. This report describes the Regional Shelter Analysis methodology, highlights key operational aspects (including demonstrating that the methodology is compatible with current tools), illustrates how to implement the methodology, and provides suggestions for future work.
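
    The combination step at the heart of the method can be illustrated with a toy population-weighted calculation; the building categories, protection factors and occupancy fractions below are illustrative assumptions, not LLNL values.

      # Toy Regional Shelter Analysis combination: population-weighted fallout protection.
      outdoor_dose = 10.0  # reference outdoor dose (arbitrary units)

      # (building / location, protection factor PF, fraction of population there, e.g. at night)
      shelters = [
          ("outdoors",                 1.0, 0.02),
          ("light wood-frame house",   3.0, 0.55),
          ("masonry apartment",       10.0, 0.35),
          ("large-building basement", 100.0, 0.08),
      ]

      # Dose in each shelter class is the outdoor dose divided by its PF;
      # the regional value is the population-weighted mean.
      avg_dose = sum(frac * outdoor_dose / pf for _, pf, frac in shelters)
      print("population-averaged dose:", round(avg_dose, 2))
      print("effective regional protection factor:", round(outdoor_dose / avg_dose, 1))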

  11. A new methodology for the estimation of fiber populations in the white matter of the brain with the Funk-Radon transform

    Science.gov (United States)

    Tristán-Vega, Antonio; Westin, Carl-Fredrik; Aja-Fernández, Santiago

    2014-01-01

    The Funk-Radon Transform (FRT) is a powerful tool for the estimation of fiber populations with High Angular Resolution Diffusion Imaging (HARDI). It is used in Q-Ball imaging (QBI), and in other HARDI techniques such as the recent Orientation Probability Density Transform (OPDT), to estimate fiber populations with very few restrictions on the diffusion model. The FRT consists of the integration of the attenuation signal, sampled by the MRI scanner on the unit sphere, along equators orthogonal to the directions of interest. It is easily proved that this calculation is equivalent to the integration of the diffusion propagator along such directions, although a characteristic blurring with a Bessel kernel is introduced. From a different point of view, the FRT can be seen as an efficient way to compute the angular part of the integral of the attenuation signal in the plane orthogonal to each direction of the diffusion propagator. In this paper, Stokes' theorem is used to prove that the FRT can in fact be used to compute accurate estimates of the true integrals defining the functions of interest in HARDI, keeping the diffusion model as little restrictive as possible. Varying the assumptions on the attenuation signal, we derive new estimators of fiber orientations, generalizing both Q-Balls and the OPDT. Extensive experiments with both synthetic and real data have been conducted to show that the new techniques improve existing ones in many situations. PMID:19815078
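
    For reference, the transform the abstract builds on can be written in its standard textbook form (notation assumed here, not quoted from the paper): the FRT of the attenuation signal E sampled on the unit sphere integrates it along the great circle orthogonal to each direction of interest u,

      \mathcal{G}[E](\mathbf{u})
        \;=\; \oint_{\{\mathbf{v}\,:\,\|\mathbf{v}\|=1,\ \mathbf{v}\cdot\mathbf{u}=0\}} E(\mathbf{v})\,\mathrm{d}\mathbf{v}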

  12. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, because bearing vibration signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and provides information additional to that given by the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd; it falls under model-free estimation and uses a plug-in estimator. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis is its extension to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the power spectrum-autocovariance Fourier transform pair. The Fourier transform is the main tool for estimation in the frequency domain, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
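
    A minimal sketch of the plug-in estimator described, using the common definition SK(f) = <|X(t,f)|^4> / <|X(t,f)|^2>^2 - 2 on the short-time Fourier transform; the simulated fault signal and all parameter values are illustrative assumptions.

      import numpy as np
      from scipy.signal import stft

      fs = 20_000                                   # sampling rate (Hz)
      t = np.arange(0, 1.0, 1 / fs)
      x = np.random.default_rng(3).normal(0, 1, t.size)          # background noise
      # Repetitive impulses (e.g. an outer-ring fault) exciting a 4 kHz resonance.
      for t0 in np.arange(0.01, 1.0, 0.012):
          idx = (t >= t0) & (t < t0 + 0.002)
          x[idx] += 3 * np.exp(-(t[idx] - t0) * 2000) * np.sin(2 * np.pi * 4000 * (t[idx] - t0))

      f, tt, X = stft(x, fs=fs, nperseg=256)
      p2 = np.mean(np.abs(X) ** 2, axis=1)          # <|X|^2> over time, per frequency bin
      p4 = np.mean(np.abs(X) ** 4, axis=1)          # <|X|^4> over time, per frequency bin
      sk = p4 / p2 ** 2 - 2.0                       # plug-in spectral kurtosis estimate
      print("frequency of maximum SK:", f[np.argmax(sk)], "Hz")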

  13. Response Surface Methodology

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.

    2014-01-01

    Abstract: This chapter first summarizes Response Surface Methodology (RSM), which started with Box and Wilson’s article in 1951 on RSM for real, non-simulated systems. RSM is a stepwise heuristic that uses first-order polynomials to approximate the response surface locally. An estimated polynomial

  14. Methodology for estimating direct recharge of an aquifer system measured by the water-table (free-surface) fluctuation (WTF) method, Arenal Mine, Rivera Department

    International Nuclear Information System (INIS)

    Iardino, G.; González, G.; Montaño, J.

    2010-01-01

    The study area lies within the so-called Isla Cristalina de Rivera, a region of economic importance to Uruguay owing to its gold occurrences. The company Uruguay Mineral Exploration Inc. (UME) has been developing exploration projects in the region since 1997. The discovery of a gold ore deposit led to the opening of the Arenal mine near the town of Minas de Corrales, Rivera Department. Groundwater inflow to the open pit would affect the progress of the mineral extraction work; to control this situation and monitor its evolution, a monitoring program was established with measurement of potentiometric levels and of hydrological and meteorological variables. The data obtained allowed the direct recharge of the aquifer system to be estimated in the area comprising the Arenal mine workings, by applying the water-table fluctuation (WTF) method. The WTF method provides an estimate of groundwater recharge by analyzing water level fluctuations in observation wells
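
    The core of the WTF calculation is small enough to sketch directly: recharge is the specific yield times the water-level rise attributed to each recharge event (the rise above the extrapolated recession). The specific yield and hydrograph values below are illustrative assumptions, not data from the Arenal site.

      # Water-table fluctuation (WTF) recharge estimate (illustrative values only).
      specific_yield = 0.05            # dimensionless, assumed for the aquifer

      # (level before event, peak level after event) in metres for each recharge event
      events = [(102.40, 102.95), (102.10, 102.80), (101.90, 102.25)]

      total_rise_m = sum(peak - before for before, peak in events)
      recharge_mm = specific_yield * total_rise_m * 1000.0
      print("estimated direct recharge over the monitored period:", round(recharge_mm, 1), "mm")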

  15. ‘Small Area Social Indicators for the Indigenous Population: Synthetic data methodology for creating small area estimates of Indigenous disadvantage’

    OpenAIRE

    Yogi Vidyattama; Robert Tanton; Nicholas Biddle

    2013-01-01

    The lack of data on how the social condition of Indigenous people varies throughout Australia has created difficulties in allocating government and community programs across Indigenous communities. In the past, spatial microsimulation has been used to derive small area estimates to overcome such difficulties. However, for previous applications, a record unit file from a survey dataset has always been available on which to conduct the spatial microsimulation. For the case of indigenous disadva...

  16. The Improvement of the Methodological Approaches to Calculating the Payback Period for Investment in order to Estimate Expenses on Establishing the Economic Security Service of an Enterprise

    Directory of Open Access Journals (Sweden)

    Melikhova Tetiana O.

    2018-03-01

    Full Text Available The aim of the article is to improve the methodological approaches to calculating the payback period for investment in order to determine the payback period for expenses on establishing the economic security service of an enterprise. It is found that the source of payback of investment at the enterprise level is the product of cash flow; these revenues (the result) go to the formation of the cash flow (the expenses) used to finance investment and financial activities. Methods are proposed for determining the gross, net, actual, and specified payback periods for advanced investments over the long term, which use the accumulated cash flow product or the accumulated cash flow as the source of financing. Analytic relationships between the gross, net, current, and specified payback periods for advanced investments are proposed that take into account the relationship between the accumulated gross, net, current and specified cash flows. The considered options for payback of advanced investment at the enterprise level will provide an opportunity to develop methods for determining the payback period for expenses on establishing the economic security service of an enterprise.
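
    The basic building block of any of these variants is the same accumulation test, sketched below: the payback period is the first period in which the accumulated cash flow covers the advanced investment (with linear interpolation inside that period). The cost and cash-flow figures are illustrative assumptions.

      def payback_period(investment, cash_flows):
          """Return the (fractional) number of periods until accumulated cash flow covers the investment."""
          cumulative = 0.0
          for period, flow in enumerate(cash_flows, start=1):
              cumulative += flow
              if cumulative >= investment:
                  return period - (cumulative - investment) / flow
          return None  # not recovered within the horizon

      security_service_cost = 120_000.0                       # advanced investment (monetary units)
      annual_loss_prevention = [30_000, 35_000, 40_000, 40_000, 40_000]
      print("payback period, years:", round(payback_period(security_service_cost, annual_loss_prevention), 2))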

  17. Improvement of radiological consequence estimation methodologies for NPP accidents in the ARGOS and RODOS decision support systems through consideration of contaminant physico-chemical forms

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, K.G.; Roos, P. [Technical University of Denmark - DTU (Denmark); Lind, O.C.; Salbu, B. [Norwegian University of Life Sciences/CERAD - NMBU (Norway); Bujan, A.; Duranova, T. [VUJE, Inc. (Slovakia); Ikonomopoulos, A.; Andronopoulos, S. [National Centre for Scientific Research ' Demokritos' (Greece)

    2014-07-01

    The European standard computerized decision support systems RODOS and ARGOS, which are integrated in the operational nuclear emergency preparedness in practically all European countries, as well as in a range of non-European countries, are highly valuable tools for radiological consequence estimation, e.g., in connection with planning and exercising as well as in justification and optimization of intervention strategies. Differences between the Chernobyl and Fukushima accident atmospheric release source terms have demonstrated that differences in release conditions and processes may lead to very different degrees of volatilization of some radionuclides. Also the physico-chemical properties of radionuclides released can depend strongly on the release process. An example from the Chernobyl accident of the significance of this is that strontium particles released in the fire were oxidized and thus generally physico-chemically different from those released during the preceding explosion. This is reflected in the very different environmental mobility of the two groups of particles. The initial elemental matrix characteristics of the contaminants, as well as environmental parameters like pH, determine for instance the particle dissolution time functions, and thus the environmental mobility and potential for uptake in living organisms. As ICRP recommends optimization of intervention according to residual dose, it is crucial to estimate long term dose contributions adequately. In the EURATOM FP7 project PREPARE, an effort is made to integrate physico-chemical forms of contaminants in scenario-specific source term determination, thereby enabling consideration of influences on atmospheric dispersion/deposition, post-deposition migration, and effectiveness of countermeasure implementation. The first step in this context was to investigate, based on available experience, the important physico-chemical properties of radio-contaminants that might potentially be released to the

  18. On methodology

    DEFF Research Database (Denmark)

    Cheesman, Robin; Faraone, Roque

    2002-01-01

    This is an English version of the methodology chapter in the authors' book "El caso Berríos: Estudio sobre información errónea, desinformación y manipulación de la opinión pública".

  19. Analysis of coastal protection under rising flood risk

    Directory of Open Access Journals (Sweden)

    Megan J. Lickley

    2014-01-01

    Full Text Available Infrastructure located along the U.S. Atlantic and Gulf coasts is exposed to rising risk of flooding from sea level rise, increasing storm surge, and subsidence. In these circumstances coastal management commonly based on 100-year flood maps assuming current climatology is no longer adequate. A dynamic programming cost–benefit analysis is applied to the adaptation decision, illustrated by application to an energy facility in Galveston Bay. Projections of several global climate models provide inputs to estimates of the change in hurricane and storm surge activity as well as the increase in sea level. The projected rise in physical flood risk is combined with estimates of flood damage and protection costs in an analysis of the multi-period nature of adaptation choice. The result is a planning method, using dynamic programming, which is appropriate for investment and abandonment decisions under rising coastal risk.
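
    A stylized version of the dynamic-programming decision can be written in a few lines: in each decade the planner either builds protection now or waits, minimising expected discounted cost under a rising flood probability. All probabilities, costs and the discount rate below are illustrative assumptions, not the Galveston Bay case-study values.

      from functools import lru_cache

      decade_flood_prob = [0.01, 0.02, 0.04, 0.08, 0.15]   # flood probability per decade, rising over time
      flood_damage = 500.0        # expected damage if unprotected and flooded (M$)
      protection_cost = 150.0     # one-off cost of hard protection (M$)
      residual_factor = 0.1       # protection leaves 10% of the expected damage
      discount = 0.97 ** 10       # decadal discount factor

      @lru_cache(maxsize=None)
      def expected_cost(decade, protected):
          if decade == len(decade_flood_prob):
              return 0.0
          p = decade_flood_prob[decade]
          damage = flood_damage * (residual_factor if protected else 1.0)
          wait = p * damage + discount * expected_cost(decade + 1, protected)
          if protected:
              return wait
          build = protection_cost + p * flood_damage * residual_factor + \
                  discount * expected_cost(decade + 1, True)
          return min(wait, build)

      print("expected discounted cost under the optimal policy (M$):", round(expected_cost(0, False), 1))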

  20. The Norwegian Emission Inventory 2011. Documentation of methodologies for estimating emissions of greenhouse gases and long-range transboundary air pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Sandmo, Trond

    2012-07-01

    The Norwegian emission inventory is a joint undertaking between the Climate and Pollution Agency and Statistics Norway. Statistics Norway is responsible for the collection and development of activity data, and emission figures are derived from models operated by Statistics Norway. The Climate and Pollution Agency is responsible for the emission factors, for providing data from specific industries and sources and for considering the quality, and assuring necessary updating, of emission models like, e.g., the road traffic model and the calculation of methane emissions from landfills. Emission data are used for a range of national applications and for international reporting. The Climate and Pollution Agency is responsible for the Norwegian reporting to the United Nations Framework Convention on Climate Change (UNFCCC) and to the United Nations Economic Commission for Europe (UNECE). This report documents the methodologies used in the Norwegian emission inventory of greenhouse gases (GHG), acidifying pollutants, heavy metals (HM) and persistent organic pollutants (POPs). The documentation will also serve as a part of the National Inventory Report submitted by Norway to the United Nations Framework Convention on Climate Change (UNFCCC), and as documentation of the reported emissions to UNECE for the pollutants restricted by CLRTAP (Convention on Long-Range Transboundary Air Pollution). LULUCF is not considered in this report; see the National Inventory Report (Climate and Pollution Agency 2011b) for documentation on this topic. This report replaces the previous documentation of the emission model (Sandmo 2010), and is the latest annually updated version of a report edited by Britta Hoem in 2005. The most important changes since last year's documentation are: to define the different economic sectors in the Norwegian emission model, the standard industrial classification SIC2007 has replaced the previous SIC2002 (Appendix F); a new model for calculating emissions to air (HBEFA)

  2. Genomic prediction using different estimation methodology, blending and cross-validation techniques for growth traits and visual scores in Hereford and Braford cattle.

    Science.gov (United States)

    Campos, G S; Reimann, F A; Cardoso, L L; Ferreira, C E R; Junqueira, V S; Schmidt, P I; Braccini Neto, J; Yokoo, M J I; Sollero, B P; Boligon, A A; Cardoso, F F

    2018-05-07

    The objective of the present study was to evaluate the accuracy and bias of direct and blended genomic predictions using different methods and cross-validation techniques for growth traits (weight and weight gains) and visual scores (conformation, precocity, muscling and size) obtained at weaning and at yearling in Hereford and Braford breeds. Phenotypic data contained 126,290 animals belonging to the Delta G Connection genetic improvement program, and a set of 3,545 animals genotyped with the 50K chip and 131 sires with the 777K. After quality control, 41,045 markers remained for all animals. An animal model was used to estimate (co)variance components and to predict breeding values, which were later used to calculate the deregressed estimated breeding values (DEBV). Animals with genotype and phenotype for the traits studied were divided into four or five groups by random and k-means clustering cross-validation strategies. The accuracies of the direct genomic values (DGV) were of moderate to high magnitude for the traits measured at weaning and at yearling, ranging from 0.19 to 0.45 for k-means and from 0.23 to 0.78 for random clustering among all traits. The greatest gain in relation to the pedigree BLUP (PBLUP) was 9.5% with the BayesB method with both the k-means and the random clustering. Blended genomic value accuracies ranged from 0.19 to 0.56 for k-means and from 0.21 to 0.82 for random clustering. The analyses using the historical pedigree and phenotypes contributed additional information to calculate the GEBV and, in general, the largest gains were for the single-step (ssGBLUP) method in bivariate analyses, with a mean increase of 43.00% among all traits measured at weaning and of 46.27% for those evaluated at yearling. The accuracy values for the marker-effect estimation methods were lower for k-means clustering, indicating that the training set relationship to the selection candidates is a major factor affecting the accuracy of genomic predictions. The gains in

  3. Chapter three: methodology of exposure modeling

    CSIR Research Space (South Africa)

    Moschandreas, DJ

    2002-12-01

    Full Text Available methodologies and models are reviewed. Three exposure/measurement methodologies are assessed. Estimation methods focus on source evaluation and attribution; sources include those outdoors and indoors as well as in occupational and in-transit environments. Fate...

  4. The Rise of Blog Nation

    Science.gov (United States)

    Lum, Lydia

    2005-01-01

    This article reports on the growth of blogs in popular culture, and the fact that they are becoming more widely accepted in the media industry. The rise and popularity of blogs--short for "Web logs"--are causing journalism educators to overhaul their teachings. In fact, blogging's influence varies from one university program to the next, just like…

  5. Finding Rising and Falling Words

    NARCIS (Netherlands)

    Tjong Kim Sang, E.

    2016-01-01

    We examine two different methods for finding rising words (among which neologisms) and falling words (among which archaisms) in decades of magazine texts (millions of words) and in years of tweets (billions of words): one based on correlation coefficients of relative frequencies and time, and one

  6. AEGIS methodology and a perspective from AEGIS methodology demonstrations

    International Nuclear Information System (INIS)

    Dove, F.H.

    1981-03-01

    Objectives of AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) are to develop the capabilities needed to assess the post-closure safety of waste isolation in geologic formations; demonstrate these capabilities on reference sites; apply the assessment methodology to assist the NWTS program in site selection, waste package and repository design; and perform repository site analyses for the licensing needs of NWTS. This paper summarizes the AEGIS methodology and the experience gained from methodology demonstrations, and provides an overview of the following areas: estimation of the response of a repository to perturbing geologic and hydrologic events; estimation of the transport of radionuclides from a repository to man; and assessment of uncertainties

  7. Methodological guidelines

    International Nuclear Information System (INIS)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-01-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objective of the project has been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs

  8. Methodological guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-04-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objective of the project has been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs.

  9. Importance of Using Multiple Sampling Methodologies for Estimating Fish Community Composition in Offshore Wind Power Construction Areas of the Baltic Sea

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Mathias H.; Gullstroem, Martin; Oehman, Marcus C. (Dept. of Zoology, Stockholm Univ., Stockholm (Sweden)); Asplund, Maria E. (Dept. of Marine Ecology, Goeteborg Univ., Kristineberg Marine Research Station, Fiskebaeckskil (Sweden))

    2007-12-15

    In this study a visual SCUBA investigation was conducted in Utgrunden 2, an area where windmills had not yet been constructed, and where the bottom mainly consisted of mud or sand with no or a sparse cover of algae or mussel beds. A wind farm at Utgrunden 2 would alter the local habitat from a predominantly sandy soft-bottom habitat to an area in which artificial reef structures resembling hard-bottom habitats are introduced, i.e., the steel foundations and possibly boulders for scour protection. The fish community that will develop over time would be expected to change to resemble the assemblages observed at Utgrunden 1, and hence would not be visible using trawling and echo-sounding sampling techniques. As the goal of an EIA is to assess changes following human development, visual techniques are recommended as a complement when examining the environmental effects of offshore wind power; otherwise important ecological changes may go unnoticed. For a comprehensive understanding of the ecological effects of wind farm developments it is recommended that a combination of sampling methods be applied and that this be defined before an investigation commences. Although it is well established in the scientific literature that different sampling methods will give different estimations of fish community composition, environmental impact assessments of offshore wind power have been incorrectly interpreted. In the interpretation of the results of such assessments it is common that the findings are extrapolated by stakeholders and the media to cover a larger part of the fish populations than was intended. Therefore, to fully understand how wind power influences fish, the underwater visual census technique is here put forward as a necessary complement to more wide-screening fish sampling methods (e.g., gill nets, echo-sounders, trawling)

  10. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    With the sampling-based method, uncertainty is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. In a transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses such as k_eff, reaction rates, flux and power distribution can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to an active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and the previous method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
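
    The idea can be illustrated with a toy stand-in for the transport calculation: each sampled cross-section set is assigned to one group of active cycles within a single run, and the spread of the cycle-group means estimates the nuclear-data-induced uncertainty. The response model below is a synthetic placeholder, not an actual Monte Carlo transport calculation.

      import numpy as np

      rng = np.random.default_rng(42)
      n_sets, cycles_per_set = 50, 20

      # Sampled "cross sections": a single normalized parameter drawn from the
      # covariance data (assumed 1% standard deviation).
      sampled_xs = rng.normal(1.0, 0.01, n_sets)

      def cycle_keff(xs_value):
          # Stand-in for one active cycle's tally: the sampled cross section shifts
          # the mean, Monte Carlo statistics add noise.
          return xs_value + rng.normal(0.0, 0.0005)

      group_means = np.array([
          np.mean([cycle_keff(xs) for _ in range(cycles_per_set)]) for xs in sampled_xs
      ])

      print("k_eff mean:", round(group_means.mean(), 5))
      print("nuclear-data-induced uncertainty (1 sigma):", round(group_means.std(ddof=1), 5))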

  11. Rise, stagnation, and rise of Danish women's life expectancy

    DEFF Research Database (Denmark)

    Lindahl-Jacobsen, Rune; Rau, Roland; Jeune, Bernard

    2016-01-01

    Health conditions change from year to year, with a general tendency in many countries for improvement. These conditions also change from one birth cohort to another: some generations suffer more adverse events in childhood, smoke more heavily, eat poorer diets, etc., than generations born earlier...... favor forecasts that hinge on cohort differences. We use a combination of age decomposition and exchange of survival probabilities between countries to study the remarkable recent history of female life expectancy in Denmark, a saga of rising, stagnating, and now again rising lifespans. The gap between...... female life expectancy in Denmark vs. Sweden grew to 3.5 y in the period 1975-2000. When we assumed that Danish women born 1915-1945 had the same survival probabilities as Swedish women, the gap remained small and roughly constant. Hence, the lower Danish life expectancy is caused by these cohorts...

  12. An Update of Sea Level Rise in the northwestern part of the Arabian Gulf

    Science.gov (United States)

    Alothman, Abdulaziz; Bos, Machiel; Fernandes, Rui

    2017-04-01

    Relative sea level variations in the northwestern part of the Arabian Gulf have been estimated in the past using no more than 10 to 15 years of observations. In Alothman et al. (2014), we almost doubled the period, to 28.7 years, by examining all available tide gauge data in the area and constructing a mean gauge time series from seven coastal tide gauges. We found for the period 1979-2007 a relative sea level rise of about 2 mm/yr, which corresponds to an absolute sea level rise of about 1.5 mm/yr based on the vertical displacement of GNSS stations in the region. By taking into account the temporal correlations we concluded that previously published results underestimate the true sea level rate error in this area by a factor of 5-10. In this work, we discuss and update the methodology and results from Alothman et al. (2014), particularly by checking and extending the GNSS solutions. Since 3 of the 6 GPS stations used only started observing at the end of 2011, the longer time series now have significantly lower uncertainties in the estimated vertical rate. In addition, we compare our results with GRACE-derived ocean bottom pressure time series, which are a good proxy of the changes in water mass in this area over time.
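
    A minimal sketch of the kind of rate estimate involved: fit a linear trend plus an annual cycle to a monthly tide-gauge series, then inflate the rate uncertainty for temporal correlation with an AR(1) effective-sample-size correction. The synthetic series and the simple AR(1) treatment are illustrative assumptions; the cited work uses more complete noise models.

      import numpy as np

      rng = np.random.default_rng(7)
      n_months = 12 * 29                               # ~29 years of monthly means
      t_years = np.arange(n_months) / 12.0
      noise = np.zeros(n_months)
      for i in range(1, n_months):                     # AR(1) noise, phi = 0.6
          noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 20)
      sea_level = 2.0 * t_years + 30 * np.sin(2 * np.pi * t_years) + noise   # mm, true rate 2 mm/yr

      # Design matrix: intercept, trend, annual sine and cosine.
      X = np.column_stack([np.ones_like(t_years), t_years,
                           np.sin(2 * np.pi * t_years), np.cos(2 * np.pi * t_years)])
      coef, *_ = np.linalg.lstsq(X, sea_level, rcond=None)
      resid = sea_level - X @ coef
      phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
      n_eff = n_months * (1 - phi) / (1 + phi)         # effective number of independent samples
      cov = np.linalg.inv(X.T @ X) * resid.var(ddof=X.shape[1]) * (n_months / n_eff)
      print("rate = %.2f +/- %.2f mm/yr" % (coef[1], np.sqrt(cov[1, 1])))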

  13. Dosimetric methodology of the ICRP

    International Nuclear Information System (INIS)

    Eckerman, K.F.

    1994-01-01

    Establishment of guidance for the protection of workers and members of the public from radiation exposures necessitates estimation of the radiation dose to tissues of the body at risk. The dosimetric methodology formulated by the International Commission on Radiological Protection (ICRP) is intended to be responsive to this need. While developed for radiation protection, elements of the methodology are often applied in addressing other radiation issues; e.g., risk assessment. This chapter provides an overview of the methodology, discusses its recent extension to age-dependent considerations, and illustrates specific aspects of the methodology through a number of numerical examples

  14. Comparison of the Bioavailability of Waste-Laden Soils Using 'In Vivo'/'In Vitro' Analytical Methodology and Bioaccessibility of Radionuclides for Refinement of Exposure/Dose Estimates

    Energy Technology Data Exchange (ETDEWEB)

    P. J. Lioy; M. Gallo; P. Georgopoulos; R. Tate; B. Buckley

    1999-09-15

    The bioavailability of soil contaminants can be measured using in vitro or in vivo techniques. Since there was no standard method for intercomparison among laboratories, we compared two techniques for bioavailability estimation: in vitro dissolution and an in vivo rat feeding model, for a NIST-traceable soil material. Bioaccessibility was measured using a sequential soil extraction in synthetic analogues of human saliva, gastric and intestinal fluids. Bioavailability was measured in Sprague Dawley rats by determining metal levels in the major organs and in urine, feces, and blood. Bioaccessibility was found to be a good indicator of relative metal bioavailability. Results are presented from bioaccessibility experiments with cesium in contaminated DOE soils, and from total alpha and beta bioaccessibility measurements. The results indicate that the modified methodology for bioaccessibility can be used for specific radionuclide analysis.

  15. REGULARITIES OF THE INFLUENCE OF ORGANIZATIONAL AND TECHNOLOGICAL FACTORS ON THE DURATION OF CONSTRUCTION OF HIGH-RISE MULTIFUNCTIONAL COMPLEXES

    Directory of Open Access Journals (Sweden)

    ZAIATS Yi. I.

    2015-10-01

    Full Text Available Problem statement. The technical and economic indicators of high-rise multifunctional complex construction projects, namely the duration of construction works and the cost of building products, depend on the construction technology and the method of construction organization, whose choice is in turn influenced by the architectural, design, structural and engineering decisions. Purpose. To reveal the regularities of the influence of organizational and technological factors on the duration of construction of high-rise multifunctional complexes in conditions of dense city building. Conclusion. The revealed regularities of the influence of organizational and technological factors (the height; the complexity factor of the design and estimate documentation; the complexity factor of the construction works; the complexity factor of managing the investment and construction project; the economy factor; the comfort factor; the factor of the technology of the designed solutions) on the duration of construction of high-rise multifunctional complexes (depending on their height: from 73.5 m to 100 m inclusive, and from 100 m to 200 m inclusive) allow their influence to be assessed quantitatively and can be used in developing a methodology for substantiating the expediency and effectiveness of high-rise construction projects in conditions of compact urban development, based on consideration of the organizational and technological aspects.

  16. MIRD methodology; Metodologia MIRD

    Energy Technology Data Exchange (ETDEWEB)

    Rojo, Ana M [Autoridad Regulatoria Nuclear, Buenos Aires (Argentina); Gomez Parada, Ines [Sociedad Argentina de Radioproteccion, Buenos Aires (Argentina)

    2004-07-01

    The MIRD (Medical Internal Radiation Dose) system was established by the Society of Nuclear Medicine of the USA in 1960 to assist the medical community in the estimation of the dose to organs and tissues due to the incorporation of radioactive materials. Since then, the 'MIRD Dose Estimate Reports' (numbers 1 to 12) and 'Pamphlets', of great utility for dose calculations, have been published. The MIRD system was planned essentially for the calculation of doses received by patients during nuclear medicine diagnostic procedures. The MIRD methodology for absorbed dose calculations in different tissues is explained.

  17. Diagnostics from three rising submillimeter bursts

    International Nuclear Information System (INIS)

    Zhou, Ai-Hua; Li, Jian-Ping; Wang, Xin-Dong

    2016-01-01

    In this paper we investigate three novel rising submillimeter (THz) bursts that occurred sequentially in Super Active Region NOAA 10486. The average rising rate of the flux density above 200 GHz is only 20 sfu GHz^-1 (corresponding to a spectral index α of 1.6) for the THz spectral components of the 2003 October 28 and November 4 bursts, but it attained values of 235 sfu GHz^-1 (α = 4.8) in the 2003 November 2 burst. The steeply rising THz spectrum can be produced by a population of highly relativistic electrons with a low-energy cutoff of 1 MeV, but it only requires a low-energy cutoff of 30 keV for the two slowly rising THz bursts, via gyrosynchrotron (GS) radiation, based on our numerical simulations of burst spectra in the magnetic dipole field case. The electron density variation is much larger in the THz source than in the microwave (MW) source. It is interesting that the THz source radius decreased by 20%-50% during the decay phase for the three events, but the MW source increased by 28% for the 2003 November 2 event. In the paper we present a formula that can be used to calculate the energy released by ultrarelativistic electrons, taking the relativistic correction into account for the first time. We find that the energy released by energetic electrons in the THz source exceeds that in the MW source due to the strong GS radiation loss in the THz range, although the modeled THz source area is 3-4 orders of magnitude smaller than the modeled MW source area. The total energies released by energetic electrons via GS radiation in the radio sources are estimated to be 5.2 × 10^33, 3.9 × 10^33 and 3.7 × 10^32 erg for the October 28, November 2 and 4 bursts, respectively, which are 131, 76 and 4 times as large as the thermal energies of 2.9 × 10^31, 2.1 × 10^31 and 5.2 × 10^31 erg estimated from soft X-ray GOES observations. (paper)

  18. PSA methodology

    Energy Technology Data Exchange (ETDEWEB)

    Magne, L

    1997-12-31

    The purpose of this text is first to ask a certain number of questions on the methods related to PSAs. Notably we will explore the positioning of the French methodological approach - as applied in the EPS 1300 [1] and EPS 900 [2] PSAs - compared to other approaches (Part One). This reflection leads to more general reflection: what contents, for what PSA? This is why, in Part Two, we will try to offer a framework for definition of the criteria a PSA should satisfy to meet the clearly identified needs. Finally, Part Three will quickly summarize the questions approached in the first two parts, as an introduction to the debate. 15 refs.

  19. PSA methodology

    International Nuclear Information System (INIS)

    Magne, L.

    1996-01-01

    The purpose of this text is first to ask a certain number of questions on the methods related to PSAs. Notably we will explore the positioning of the French methodological approach - as applied in the EPS 1300 [1] and EPS 900 [2] PSAs - compared to other approaches (Part One). This reflection leads to more general reflection: what contents, for what PSA? This is why, in Part Two, we will try to offer a framework for definition of the criteria a PSA should satisfy to meet the clearly identified needs. Finally, Part Three will quickly summarize the questions approached in the first two parts, as an introduction to the debate. 15 refs

  20. The use of nanomodified concrete in construction of high-rise buildings

    Science.gov (United States)

    Prokhorov, Sergei

    2018-03-01

    Construction is one of the leading sectors of the economy. Currently, concrete is the basis of most structural elements, without which it is impossible to imagine the construction of a single building or facility. Their strength, reinforcement and service life are determined at the design stage, taking long-term operation into account. However, in real life, the number of impacts that affect the structural strength is quite large. In some cases, they are random and do not have standardized values. This is especially true in the construction and operation of high-rise buildings and structures. Unlike multi-storey buildings, they experience significant loads already at the stage of erection, as they support load-lifting mechanisms, formwork systems, workers, etc. The purpose of the presented article is to develop a methodology for estimating the internal fatigue of concrete structures based on changes in their electrical conductivity.

  1. Using GNSS for Assessment Recent Sea Level Rise in the Northwestern Part of the Arabian Gulf

    Science.gov (United States)

    Alothman, A. O.; Bos, M. S.; Fernandes, R.

    2017-12-01

    Due to the global warming acting on the planet in the 21st century, the associated sea level rise is predicted to reach 30 cm to 60 cm in some regions. Sea level monitoring is important for the Kingdom of Saudi Arabia, since it is surrounded by a very long coast of about 3400 km in length and hundreds of isolated islands. The eastern coastline of KSA, on the Arabian Gulf, needs long-term monitoring due to the low-lying nature of the region. Also, the ongoing oil withdrawal activities in the area may affect the regional sea level rise. In addition to these two factors, the tectonic structure of the Arabian Peninsula also plays a role. The regional relative sea level on the eastern coast of Saudi Arabia has been estimated in the past using more than 28 years of tide gauge data together with the vertical displacement of permanent Global Navigation Satellite System (GNSS) stations with a time span of only about 3 years. In this paper, we discuss and update the methodology and results from Alothman et al. (2014), particularly by checking and extending the GNSS solutions. Since 3 of the 6 GPS stations used only started observing at the end of 2011, the longer time series now have significantly lower uncertainties in the estimated vertical rate. A longer time span of GNSS observations was included, 500 synthetic time series were generated, and seasonal signals were analysed. It is concluded that the varying seasonal signal present in the GNSS time series causes an underestimation of 0.1 mm/yr for short time series of 3 years. In addition to the implications of using short time series to estimate the vertical land motion, we found that if varying seasonal signals are present in the data, the problem is aggravated. This finding can be useful for other studies analyzing short GNSS time series.

  2. Relative Hazard Calculation Methodology

    International Nuclear Information System (INIS)

    DL Strenge; MK White; RD Stenner; WB Andrews

    1999-01-01

    The methodology presented in this document was developed to provide a means of calculating the RH ratios to use in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated values of hazard could then be examined to help understand which risk management activities are addressing the higher hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation)

  3. The Climate Science Special Report: Rising Seas and Changing Oceans

    Science.gov (United States)

    Kopp, R. E.

    2017-12-01

    Global mean sea level (GMSL) has risen by about 16-21 cm since 1900. Ocean heat content has increased at all depths since the 1960s, and global mean sea-surface temperature increased 0.7°C/century between 1900 and 2016. Human activity contributed substantially to generating a rate of GMSL rise since 1900 faster than during any preceding century in at least 2800 years. A new set of six sea-level rise scenarios, spanning a range from 30 cm to 250 cm of 21st century GMSL rise, was developed for the CSSR. The lower scenario is based on linearly extrapolating the past two decades' rate of rise. The upper scenario is informed by literature estimates of maximum physically plausible values, observations indicating the onset of marine ice sheet instability in parts of West Antarctica, and modeling of ice-cliff and ice-shelf instability mechanisms. The new scenarios include localized projections along US coastlines. There is significant variability around the US, with rates of rise likely greater than GMSL rise in the US Northeast and the western Gulf of Mexico. Under scenarios involving extreme Antarctic contributions, regional rise would be greater than GMSL rise along almost all US coastlines. Historical sea-level rise has already driven a 5- to 10-fold increase in minor tidal flooding in several US coastal cities since the 1960s. Under the CSSR's Intermediate sea-level rise scenario (1.0 m of GMSL rise in 2100), a majority of NOAA tide gauge locations will by 2040 experience the historical 5-year coastal flood about 5 times per year. Ocean changes are not limited to rising sea levels. Ocean pH is decreasing at a rate that may be unparalleled in the last 66 million years. Along coastlines, ocean acidification can be enhanced by changes in upwelling (particularly along the US Pacific Coast); by episodic, climate change-enhanced increases in freshwater input (particularly along the US Atlantic Coast); and by the enhancement of biological respiration by nutrient runoff. Climate models project

  4. METHODOLOGICAL PROBLEMS OF PRACTICAL RADIOGENIC RISK ESTIMATIONS

    Directory of Open Access Journals (Sweden)

    A. Т. Gubin

    2014-01-01

    Full Text Available Mathematical relations were established following the description of the calculation procedure for the values of the nominal risk coefficient given in the ICRP 2007 Recommendations. It is shown that the lifetime radiogenic risk is a linear functional of the distribution of dose over time, with a multiplier that decreases with age. As a consequence, application of the nominal risk coefficient in risk calculations is justified when prolonged exposure is practically evenly distributed in time, and it gives a significant deviation for a single acute exposure. When the additive model of radiogenic risk proposed in the UNSCEAR 2006 Report for solid cancers is used, this factor decreases almost linearly with age, which is convenient for its practical application.
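
    Schematically, and with notation assumed here rather than taken from the paper, the statement is that the lifetime risk R is a linear functional of the dose-rate history, weighted by a coefficient that falls with the age a at exposure:

      R \;=\; \int_{a_0}^{a_{\max}} r(a)\,\dot{D}(a)\,\mathrm{d}a ,
      \qquad r'(a) < 0 ,

    where \dot{D}(a) is the dose rate received at age a and r(a) the age-dependent risk coefficient.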

  5. Space-planning and structural solutions of low-rise buildings: Optimal selection methods

    Science.gov (United States)

    Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander

    2017-11-01

    The present study is devoted to the elaboration of a methodology for appropriately selecting space-planning and structural solutions for low-rise buildings. The objective of the study is to work out a system of criteria influencing the selection of the space-planning and structural solutions that are most suitable for low-rise buildings and structures. Application of the defined criteria in practice aims to enhance the efficiency of capital investments, save energy and resources, and create comfortable conditions for the population, considering the climatic zoning of the construction site. The developments of the project can be applied in implementing investment-construction projects of low-rise housing in different kinds of territories based on local building materials. A system of criteria influencing the optimal selection of space-planning and structural solutions for low-rise buildings has been developed. A methodological basis has also been elaborated to assess the optimal selection of space-planning and structural solutions for low-rise buildings satisfying the requirements of energy efficiency, comfort, safety, and economic efficiency. The elaborated methodology makes it possible to intensify the development of low-rise construction for different types of territories, taking into account the climatic zoning of the construction site. Stimulation of low-rise construction processes should be based on a system of scientifically justified approaches; this allows enhancing the energy efficiency, comfort, safety and economic effectiveness of low-rise buildings.

  6. Approximate analysis of high-rise frames with flexible connections

    NARCIS (Netherlands)

    Hoenderkamp, J.C.D.; Snijder, H.H.

    2000-01-01

    An approximate hand method for estimating horizontal deflections in high-rise steel frames with flexible beam–column connections subjected to horizontal loading is presented. The method is developed from the continuous medium theory for coupled walls which is expressed in non-dimensional structural

  7. Sea Level Rise Data Discovery

    Science.gov (United States)

    Quach, N.; Huang, T.; Boening, C.; Gill, K. M.

    2016-12-01

    Research related to sea level rise crosses multiple disciplines from sea ice to land hydrology. The NASA Sea Level Change Portal (SLCP) is a one-stop source for current sea level change information and data, including interactive tools for accessing and viewing regional data, a virtual dashboard of sea level indicators, and ongoing updates through a suite of editorial products that include content articles, graphics, videos, and animations. The architecture behind the SLCP makes it possible to integrate web content and data relevant to sea level change that are archived across various data centers as well as new data generated by sea level change principal investigators. The Extensible Data Gateway Environment (EDGE) is incorporated into the SLCP architecture to provide a unified platform for web content and science data discovery. EDGE is a data integration platform designed to facilitate high-performance geospatial data discovery and access with the ability to support multi-metadata standard specifications. EDGE has the capability to retrieve data from one or more sources and package the resulting sets into a single response to the requestor. With this unified endpoint, the Data Analysis Tool that is available on the SLCP can retrieve dataset and granule level metadata as well as perform geospatial search on the data. This talk focuses on the architecture that makes it possible to seamlessly integrate and enable discovery of disparate data relevant to sea level rise.

  8. Testing methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes on the DNA level, while presently less well developed techniques, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs.

  9. Testing methodologies

    International Nuclear Information System (INIS)

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes on the DNA level, while presently less well developed techniques, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs

  10. Probabilistic methodology for turbine missile risk analysis

    International Nuclear Information System (INIS)

    Twisdale, L.A.; Dunn, W.L.; Frank, R.A.

    1984-01-01

    A methodology has been developed for estimation of the probabilities of turbine-generated missile damage to nuclear power plant structures and systems. Mathematical models of the missile generation, transport, and impact events have been developed and sequenced to form an integrated turbine missile simulation methodology. Probabilistic Monte Carlo techniques are used to estimate the plant impact and damage probabilities. The methodology has been coded in the TURMIS computer code to facilitate numerical analysis and plant-specific turbine missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and probabilities have been estimated for a hypothetical nuclear power plant case study. (orig.)
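
    A minimal sketch of the kind of sequenced Monte Carlo estimate described above, in which missile generation, strike and damage are treated as successive probabilistic events. The geometry window, fragility and generation frequency below are illustrative placeholders, not TURMIS parameters.

      import math
      import random

      def strike_and_damage_probability(n_missiles=200_000, seed=1,
                                        target_azimuth_width=0.25,          # rad, placeholder target extent
                                        target_elevation_band=(0.1, 0.6),   # rad, placeholder
                                        p_damage_given_hit=0.3):            # placeholder fragility
          """Monte Carlo estimate of P(strike and damage | missile generated):
          sample a random ejection direction, test whether it falls in a crudely
          idealized target window, then apply a damage fragility."""
          rng = random.Random(seed)
          hits_with_damage = 0
          for _ in range(n_missiles):
              azimuth = rng.uniform(-math.pi, math.pi)      # ejection azimuth
              elevation = rng.uniform(0.0, math.pi / 2)     # ejection elevation
              hit = (abs(azimuth) < target_azimuth_width / 2 and
                     target_elevation_band[0] < elevation < target_elevation_band[1])
              if hit and rng.random() < p_damage_given_hit:
                  hits_with_damage += 1
          return hits_with_damage / n_missiles

      if __name__ == "__main__":
          p_generation = 1e-4                               # missile generation frequency per year (placeholder)
          p_sd = strike_and_damage_probability()
          print(f"P(strike and damage | generation) ~ {p_sd:.4f}")
          print(f"Damage frequency per year ~ {p_generation * p_sd:.2e}")

    Conditioning on missile generation in this way, rather than simulating whole plant-years, keeps the rare-event statistics tractable and mirrors the sequencing of generation, transport and impact models described in the abstract.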

  11. Trend analysis of modern high-rise construction

    Science.gov (United States)

    Radushinsky, Dmitry; Gubankov, Andrey; Mottaeva, Asiiat

    2018-03-01

    The article reviews the main trends of modern high-rise construction, considering a number of architectural, engineering and technological, economic and image factors that have driven the intensification of high-rise building construction in the 21st century. The key factors of modern high-rise construction are identified: an attractive image component for businessmen and politicians, the ability to embody current views on architecture and innovations in construction technologies together with the lobbying of the relevant structures, and the opportunity to serve as an effective driver for a complex of national economy sectors with a multiplicative effect. An assessment is given of the priority given to foreign architectural bureaus in the design of super-tall buildings in Russia at the present stage. The issue of the economic expediency of constructing high-rise buildings, including those with only a residential function, is investigated. The article also examines the connection between the construction of skyscrapers and the image of cities in the marketing of places and territories, the connection between the availability of a high-rise centre (the City) and the possibility of attracting a "creative class", and the features of creating a large working space for specialists on the basis of the territorial proximity and density of high-rise buildings.

  12. Plume rise measurements at Turbigo

    Energy Technology Data Exchange (ETDEWEB)

    Anfossi, D

    1982-01-01

    This paper presents analyses of plume measurements obtained during that campaign by the ENEL ground-based Lidar. The five stacks of Turbigo Power Plant have different heights and emission parameters and their plumes usually combine, so a model for multiple sources was used to predict the plume rises. These predictions are compared with the observations. Measurements of σ_v and σ_z over the first 1000 m are compared with the curves derived from other observations in the Po Valley, using the no-lift balloon technique over the same range of downwind distance. Skewness and kurtosis distributions are shown, both along the vertical and the horizontal directions. In order to show the plume structure in more detail, we present two examples of Lidar-derived cross sections and the corresponding vertically and horizontally integrated concentration profiles.
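
    For orientation only, a minimal sketch of a standard single-stack buoyant plume rise estimate (the widely used Briggs transitional-rise relation). This is not the multiple-source model used in the campaign, and the stack and wind parameters are illustrative.

      G = 9.81  # gravitational acceleration, m/s^2

      def buoyancy_flux(stack_radius, exit_velocity, gas_temp, ambient_temp):
          """Briggs buoyancy flux parameter F in m^4/s^3."""
          return G * exit_velocity * stack_radius ** 2 * (gas_temp - ambient_temp) / gas_temp

      def briggs_transitional_rise(F, wind_speed, x):
          """Transitional plume rise dh = 1.6 * F**(1/3) * x**(2/3) / u for a buoyant plume."""
          return 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / wind_speed

      if __name__ == "__main__":
          # Illustrative stack parameters only, not the Turbigo emission data.
          F = buoyancy_flux(stack_radius=2.0, exit_velocity=15.0, gas_temp=410.0, ambient_temp=288.0)
          for x in (200.0, 500.0, 1000.0):
              dh = briggs_transitional_rise(F, wind_speed=4.0, x=x)
              print(f"x = {x:6.0f} m   plume rise = {dh:6.1f} m")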

  13. Superphenix set to rise again

    International Nuclear Information System (INIS)

    Dorozynski, A.

    1993-01-01

    Superphenix, France's seemingly jinxed fast breeder reactor, which has not produced a single kilowatt of energy in more than 3 years, looks set to rise up next year like the mythical bird it is named after. The $5 billion reactor, the largest fast breeder in the world, has just been given the seal of approval by a public commission ordered by the government to look at the pros and cons of restarting. It still has hoops to jump through: a safety check and approval from the ministries of industries and environment. But the consortium of French, Italian, and German power utilities that run the plant are confident they can get it running by next summer. The Superphenix that rises out of the ashes will, however, be a different species of bird from the one planned 20 years ago. The consortium plans to turn the reactor into a debreeder, one that will incinerate more plutonium than it produces and so eat into Europe's plutonium stockpile. Calculations by Superphenix staff and the Atomic Energy Commission indicate that a plutonivorous fast breeder could incinerate 15 to 25 kilograms of plutonium while producing 1 billion kilowatt-hours of electricity-scarcely enough to make a dent in the tonnes of plutonium produced by Electricite de France's reactors each year. The Superphenix consortium is anxious to get the reactor back on line. The annual cost of upkeep and repair of the idle plant and salaries for its 700 staff may reach $140 million this year, 20% more than if the plant was running normally. If restarted, the existing core and a second one ready on the shelf will generate electricity worth $1.3 billion

  14. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  15. Geotechnical site assessment methodology

    International Nuclear Information System (INIS)

    Tunbridge, L.W.; Richards, L.R.

    1985-09-01

    The reports comprising this volume concern the research conducted on geotechnical site assessment methodology at the Carwynnen test mine in granites in Cornwall, with particular reference to the effect of structures imposed by discontinuities on the engineering behaviour of rock masses. The topics covered are: in-situ stress measurements using (a) the hydraulic fracturing method, or (b) the US Bureau of Mines deformation probe; scanline discontinuity survey - coding form and instructions, and data; applicability of geostatistical estimation methods to scalar rock properties; comments on in-situ stress at the Carwynnen test mine and the state of stress in the British Isles. (U.K.)

  16. Methodology of site studies

    International Nuclear Information System (INIS)

    Caries, J.C.; Hugon, J.; Grauby, A.

    1980-01-01

    This methodology consists of an essentially dynamic, predictive and follow-up analysis of the impact of discharges on all the environmental compartments, whether natural or not, that play a part in the protection of man and his environment. It applies at two levels: the choice of a site, and the detailed study of the selected site. Two examples of its application are developed: the site-selection level in the case of marine sites, and the detailed-study level of the chosen site in the case of a riverside site [fr

  17. Probabilistic reanalysis of twentieth-century sea-level rise.

    Science.gov (United States)

    Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X

    2015-01-22

    Estimating and accounting for twentieth-century global mean sea level (GMSL) rise is critical to characterizing current and future human-induced sea-level change. Several previous analyses of tide gauge records--employing different methods to accommodate the spatial sparsity and temporal incompleteness of the data and to constrain the geometry of long-term sea-level change--have concluded that GMSL rose over the twentieth century at a mean rate of 1.6 to 1.9 millimetres per year. Efforts to account for this rate by summing estimates of individual contributions from glacier and ice-sheet mass loss, ocean thermal expansion, and changes in land water storage fall significantly short in the period before 1990. The failure to close the budget of GMSL during this period has led to suggestions that several contributions may have been systematically underestimated. However, the extent to which the limitations of tide gauge analyses have affected estimates of the GMSL rate of change is unclear. Here we revisit estimates of twentieth-century GMSL rise using probabilistic techniques and find a rate of GMSL rise from 1901 to 1990 of 1.2 ± 0.2 millimetres per year (90% confidence interval). Based on individual contributions tabulated in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, this estimate closes the twentieth-century sea-level budget. Our analysis, which combines tide gauge records with physics-based and model-derived geometries of the various contributing signals, also indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records. The increase in rate relative to the 1901-90 trend is accordingly larger than previously thought; this revision may affect some projections of future sea-level rise.

  18. Rising prices squeeze gas marketer

    Energy Technology Data Exchange (ETDEWEB)

    Lunan, D.

    2000-06-19

    Apollo Gas, a Toronto-based gas marketer, is considering options to enhance unit holder value, including sale of its 21,000 gas supply contracts, just weeks after it was forced out of the Alberta market by rising gas prices. Although the company had reported first quarter revenues of more than $15 million and earnings through that period of about $2.1 million, increases of 33 per cent and 38 per cent respectively over the same period in 1999, the company is resigned to the fact that such performance markers are not likely to be reached again in the foreseeable future, hence the decision to sell. About 95 per cent of Apollo's current transportation service volumes are matched to existing fixed-price supply contracts which are due to expire in November 2000. After that, it is about 75 per cent matched for the balance of the term of its customer contracts (mostly five years). This means that the company is exposed to market prices that are likely to continue to increase. If this prediction holds true, Apollo would be forced to purchase the unhedged volumes of gas it needs to service its customers in the spot market at prices higher than prices the company is charging to its customers.

  19. Rising prices squeeze gas marketer

    International Nuclear Information System (INIS)

    Lunan, D.

    2000-01-01

    Apollo Gas, a Toronto-based gas marketer, is considering options to enhance unit holder value, including sale of its 21,000 gas supply contracts, just weeks after it was forced out of the Alberta market by rising gas prices. Although the company had reported first quarter revenues of more than $15 million and earnings through that period of about $2.1 million, increases of 33 per cent and 38 per cent respectively over the same period in 1999, the company is resigned to the fact that such performance markers are not likely to be reached again in the foreseeable future, hence the decision to sell. About 95 per cent of Apollo's current transportation service volumes are matched to existing fixed-price supply contracts which are due to expire in November 2000. After that, it is about 75 per cent matched for the balance of the term of its customer contracts (mostly five years). This means that the company is exposed to market prices that are likely to continue to increase. If this prediction holds true, Apollo would be forced to purchase the unhedged volumes of gas it needs to service its customers in the spot market at prices higher than prices the company is charging to its customers

  20. Methods of erection of high-rise buildings

    Directory of Open Access Journals (Sweden)

    Cherednichenko Nadezhda

    2018-01-01

    The article sets out the factors determining the choice of methods for organizing the construction process and carrying out construction and installation work in the erection of high-rise buildings. Specific features of their underground parts are also indicated, characterized by massive slab-pile foundations, large volumes of earthworks, and reinforced bases and foundations for assembly cranes. The work cycle is considered for reinforced concrete, steel and combined skeletons of high-rise buildings, and the areas of application of flow-line, separate and integrated methods are described. The main conditions for the erection of high-rise buildings and their components are singled out: the choice of formwork systems, the delivery and lifting of concrete mixes, the installation of reinforcement, and the selection of lifting, transporting and auxiliary equipment. The article identifies reserves for reducing the duration of construction through the creation of: complex mechanized technologies for the efficient construction of foundations in various soil conditions, including heaving, swelling, constrained, subsiding, filled and water-saturated ground; complex mechanized technologies for the erection of monolithic reinforced concrete structures, taking into account winter working conditions and the use of mobile concrete-placing complexes and new-generation machines; modular formwork systems distinguished by their versatility, light weight and simplicity of operation, suitable for complex high-rise construction; and a more refined methodology together with a set of progressive organizational and technological solutions that ensure a rational relationship between production processes and their maximum overlap in time and space.

  1. Methods of erection of high-rise buildings

    Science.gov (United States)

    Cherednichenko, Nadezhda; Oleinik, Pavel

    2018-03-01

    The article sets out the factors determining the choice of methods for organizing the construction process and carrying out construction and installation work in the erection of high-rise buildings. Specific features of their underground parts are also indicated, characterized by massive slab-pile foundations, large volumes of earthworks, and reinforced bases and foundations for assembly cranes. The work cycle is considered for reinforced concrete, steel and combined skeletons of high-rise buildings, and the areas of application of flow-line, separate and integrated methods are described. The main conditions for the erection of high-rise buildings and their components are singled out: the choice of formwork systems, the delivery and lifting of concrete mixes, the installation of reinforcement, and the selection of lifting, transporting and auxiliary equipment. The article identifies reserves for reducing the duration of construction through the creation of: complex mechanized technologies for the efficient construction of foundations in various soil conditions, including heaving, swelling, constrained, subsiding, filled and water-saturated ground; complex mechanized technologies for the erection of monolithic reinforced concrete structures, taking into account winter working conditions and the use of mobile concrete-placing complexes and new-generation machines; modular formwork systems distinguished by their versatility, light weight and simplicity of operation, suitable for complex high-rise construction; and a more refined methodology together with a set of progressive organizational and technological solutions that ensure a rational relationship between production processes and their maximum overlap in time and space.

  2. The Rise of native advertising

    OpenAIRE

    Marius MANIC

    2015-01-01

    Native advertising is described both as a new way for promoters to engage audiences and as a new, clever, source of revenue for publishers and media agencies. The debates around its morality and the need for a widely accepted framework are often viewed as calls for creativity. Aside from the various forms, strategies and the need for clarification, the fact that native advertising works and its revenue estimates increase annually transforms the new type of ad into a clear ob...

  3. THE RISE TIME OF NORMAL AND SUBLUMINOUS TYPE Ia SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Gaitan, S.; Perrett, K.; Carlberg, R. [Department of Astronomy and Astrophysics, University of Toronto, 50 St. george Street, Toronto, ON M5S 3H4 (Canada); Conley, A. [Center for Astrophysics and Space Astronomy, University of Colorado, 593 UCB, Boulder, CO 80309-0593 (United States); Bianco, F. B.; Howell, D. A.; Graham, M. L. [Department of Physics, University of California, Santa Barbara, Broida Hall, Mail Code 9530, Santa Barbara, CA 93106-9530 (United States); Sullivan, M.; Hook, I. M. [Department of Physics (Astrophysics), University of Oxford, DWB, Keble Road, Oxford, OX1 3RH (United Kingdom); Astier, P.; Balland, C.; Fourmanoit, N.; Guy, J.; Hardin, D.; Pain, R. [LPNHE, Universite Pierre et Marie Curie Paris 6, Universite Paris Diderot Paris 7, CNRS-IN2P3, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Balam, D. [Dominion Astrophysical Observatory, Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Basa, S. [Laboratoire d' Astrophysique de Marseille, Pole de l' Etoile Site de Chateau-Gombert, 38, rue Frederic Joliot-Curie, 13388 Marseille cedex 13 (France); Fouchez, D. [CPPM, CNRS-IN2P3 and University Aix Marseille II, Case 907, 13288 Marseille cedex 9 (France); Lidman, C. [Australian Astronomical Observatory, P.O. Box 296, Epping, NSW 1710 (Australia); Palanque-Delabrouille, N., E-mail: gonzalez@astro.utoronto.ca [DSM/IRFU/SPP, CEA-Saclay, F-91191 Gif-sur-Yvette (France); and others

    2012-01-20

    We calculate the average stretch-corrected rise time of Type Ia supernovae (SNe Ia) in the Supernova Legacy Survey. We use the aggregate light curves of spectroscopic and photometrically identified SNe Ia to fit the rising part of the light curve with a simple quadratic model. We obtain a light curve shape corrected, i.e., stretch-corrected, fiducial rise time of 17.02 (+0.18/-0.28) (stat) days. The measured rise time differs from an earlier finding by the SNLS (Conley et al.) due to the use of different SN Ia templates. We compare it to nearby samples using the same methods and find no evolution in the early part of the light curve of SNe Ia up to z = 1. We search for variations among different populations, particularly subluminous objects, by dividing the sample in stretch. Bright and slow decliners (s > 1.0) have consistent stretch-corrected rise times compared to fainter and faster decliners (0.8 < s <= 1.0); they are shorter by 0.57 (+0.47/-0.50) (stat) days. Subluminous SNe Ia (here defined as objects with s <= 0.8), although less constrained, are also consistent, with a rise time of 18.03 (+0.81/-1.37) (stat) days. We study several systematic biases and find that the use of different fiducial templates may affect the average rise time but not the intrinsic differences between populations. Based on our results, we estimate that subluminous SNe Ia are powered by 0.05-0.35 solar masses of ⁵⁶Ni synthesized in the explosion. Our conclusions are the same for the single-stretch and two-stretch parameterizations of the light curve.

  4. The rise of colliding beams

    International Nuclear Information System (INIS)

    Richter, B.

    1992-06-01

    It is a particular pleasure for me to have this opportunity to review for you the rise of colliding beams as the standard technology for high-energy-physics accelerators. My own career in science has been intimately tied up in the transition from the old fixed-target technique to colliding-beam work. I have led a kind of double life both as a machine builder and as an experimenter, taking part in building and using the first of the colliding-beam machines, the Princeton-Stanford Electron-Electron Collider, and building the most recent advance in the technology, the Stanford Linear Collider. The beginning was in 1958, and in the 34 years since there has been a succession of both electron and proton colliders that have increased the available center-of-mass energy for hard collisions by more than a factor of 1000. For the historians here, I regret to say that very little of this story can be found in the conventional literature. Standard operating procedure for the accelerator physics community has been publication in conference proceedings, which can be obtained with some difficulty, but even more of the critical papers are in internal laboratory reports that were circulated informally and that may not even have been preserved. In this presentation I shall review what happened based on my personal experiences and what literature is available. I can speak from considerable experience on the electron colliders, for that is the topic in which I was most intimately involved. On proton colliders my perspective is more than of an observer than of a participant, but I have dug into the literature and have been close to many of the participants

  5. Improvement of the bubble rise velocity model in the pressurizer using ALMOD 3 computer code to calculate evaporation

    International Nuclear Information System (INIS)

    Madeira, A.A.

    1985-01-01

    The improvement of the bubble rise velocity calculation is studied by adding two different ways of estimating this velocity, one of which is more adequate for the pressures normally found in the Reactor Cooling System. Additionally, a limitation on bubble rise velocity growth was imposed to account for the actual behavior of bubble rise in two-phase mixtures. (Author) [pt

  6. Comparação de diferentes metodologias para estimativa de curvas intensidade-duração-freqüência para Pelotas - RS Comparison of different methodologies to estimate intensity-duration-frequency curves for Pelotas - RS, Brazil

    Directory of Open Access Journals (Sweden)

    Rita de C. F. Damé

    2008-06-01

    In agricultural hydraulic engineering projects where observed streamflow data are not available, it is necessary to make the most of the information contained in Intensity-Duration-Frequency (IDF) curves. It is therefore necessary to develop methodologies for estimating IDF curves at locations with little or no pluviograph data. The objective of this work was to compare methodologies for disaggregating daily rainfall in order to verify the gain of information, in terms of IDF curves, compared with the curve obtained from observed (historical) data. The methods used were: (a) the Ratios Method (CETESB, 1979); (b) BELTRAME et al. (1991); (c) ROBAINA & PEITER (1992); (d) the Modified Bartlett-Lewis Rectangular Pulse model (DAMÉ, 2001). A daily precipitation data series from Pelotas - RS, covering the period 1982-1998, was used. To estimate the IDF curves from the historical records, durations of 15, 30, 60, 360, 720 and 1,440 minutes and return periods of 2, 5 and 10 years were established. The maximum intensity values were compared with one another by Student's t-test, for the linear and angular coefficients, and by the mean square relative error. The method that best represented the maximum precipitation intensities, for the 2- and 10-year return periods, was the Ratios Method (CETESB, 1979).
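
    As an illustration of the ratio-based disaggregation idea compared in the study, the sketch below converts a maximum daily rainfall depth into shorter-duration depths by chaining fixed ratios; dividing each depth by its duration then gives the intensities used to fit an IDF curve. The numerical ratios are the values commonly quoted for the CETESB (1979) relations and should be checked against the original source before use.

      # Commonly quoted disaggregation ratios (verify against CETESB, 1979).
      RATIOS = {
          "24h/1day":    1.14,    # daily (pluviometer) depth -> 24 h depth
          "1h/24h":      0.42,
          "30min/1h":    0.74,
          "15min/30min": 0.70,
      }

      def disaggregate_daily_depth(daily_depth_mm):
          """Chain the ratios to obtain rainfall depths (mm) for sub-daily durations."""
          depth_24h = daily_depth_mm * RATIOS["24h/1day"]
          depth_1h = depth_24h * RATIOS["1h/24h"]
          depth_30min = depth_1h * RATIOS["30min/1h"]
          depth_15min = depth_30min * RATIOS["15min/30min"]
          return {"24h": depth_24h, "1h": depth_1h, "30min": depth_30min, "15min": depth_15min}

      if __name__ == "__main__":
          # Illustrative 2-year return period daily maximum of 80 mm.
          for duration, depth in disaggregate_daily_depth(80.0).items():
              print(f"{duration:>5}: {depth:6.1f} mm")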

  7. The rise of moral cognition.

    Science.gov (United States)

    Greene, Joshua D

    2015-02-01

    The field of moral cognition has grown rapidly in recent years thanks in no small part to Cognition. Consistent with its interdisciplinary tradition, Cognition encouraged the growth of this field by supporting empirical research conducted by philosophers as well as research native to neighboring fields such as social psychology, evolutionary game theory, and behavioral economics. This research has been exceptionally diverse both in its content and methodology. I argue that this is because morality is unified at the functional level, but not at the cognitive level, much as vehicles are unified by shared function rather than shared mechanics. Research in moral cognition, then, has progressed by explaining the phenomena that we identify as "moral" (for high-level functional reasons) in terms of diverse cognitive components that are not specific to morality. In light of this, research on moral cognition may continue to flourish, not as the identification and characterization of distinctive moral processes, but as a testing ground for theories of high-level, integrative cognitive function. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. The Rise of native advertising

    Directory of Open Access Journals (Sweden)

    Marius MANIC

    2015-06-01

    Native advertising is described both as a new way for promoters to engage audiences and as a new, clever, source of revenue for publishers and media agencies. The debates around its morality and the need for a widely accepted framework are often viewed as calls for creativity. Aside from the various forms, strategies and the need for clarification, the fact that native advertising works and its revenue estimates increase annually transforms the new type of ad into a clear objective for companies, marketers and publishers. Native advertising stopped being a buzzword and started being a marketing reality.

  9. State estimation for a hexapod robot

    CSIR Research Space (South Africa)

    Lubbe, Estelle

    2015-09-01

    This paper introduces a state estimation methodology for a hexapod robot that makes use of proprioceptive sensors and a kinematic model of the robot. The methodology focuses on providing reliable full pose state estimation for a commercially...

  10. Anthropogenic sea level rise and adaptation in the Yangtze estuary

    Science.gov (United States)

    Cheng, H.; Chen, J.; Chen, Z.; Ruan, R.; Xu, G.; Zeng, G.; Zhu, J.; Dai, Z.; Gu, S.; Zhang, X.; Wang, H.

    2016-02-01

    Sea level rise is a major projected threat of climate change. There are regional variations in sea level change, depending both on natural factors such as tectonic subsidence, geomorphology and naturally changing river inputs, and on anthropogenic driving forces such as artificial reservoir impoundment within the watershed and urban land subsidence driven by groundwater depletion in the river delta. Little is known about the contribution to sea level change in estuaries of regional sea level fall in response to channel erosion, caused by the decline in sediment discharge due to reservoir interception in the upstream watershed, and of water level rise driven by anthropogenic measures such as land reclamation, deep waterway regulation and freshwater reservoir construction. Coastal cities situated in delta regions are expected to be threatened to varying degrees; Shanghai is one of them. Here we examine the anthropogenically driven sea level rise in the Yangtze estuary from the point of view of the continuous hydrodynamic system consisting of river catchment, estuary and coastal sea. Land subsidence is cited as 4 mm/a (2011-2030). The scour depth of the estuarine channel caused by upstream engineering such as the Three Gorges Dam is estimated at 2-10 cm (2011-2030). The rise in water level caused by the deep waterway and land reclamation is estimated at 8-10 cm (2011-2030). The relative sea level rise is estimated at about 10-16 cm (2011-2030), with these anthropogenic sea level changes superimposed on the absolute sea level rise of 2 mm/a and the tectonic subsidence of 1 mm/a measured in the 1990s. Action guidelines for the sea level rise strategy of the city of Shanghai have been proposed to the Shanghai government as (1) recent actions (2012-2015) to upgrade the city's water supply, drainage and protective engineering; (2) interim actions (2016-2020) to improve the sea level monitoring and early warning system, and then the special, city and regional planning considering sea level rise; (3) long term actions (2021

  11. Food availability and the rising obesity prevalence in Malaysia

    OpenAIRE

    Geok-Lin Khor

    2012-01-01

    It is estimated that more than 1.1 billion adults and 115 million children worldwide are overweight. In Malaysia, the second and third National Health and Morbidity Surveys in 1996 and 2006 respectively reported a three-fold increase in obesity prevalence among adults, surging from 4.4% to 14% over the 10-year period. Evidence of rising childhood obesity has also emerged. The aim of this article is to gather evidence from food availability data for an insight into population shifts in dietary patterns...

  12. Project risk management in the construction of high-rise buildings

    Science.gov (United States)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be identified more reliably and managed throughout the life cycle of the project. One of the project risk management processes is the quantitative analysis of risks, which usually includes assessing the potential impact of project risks and their probabilities. The paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the regression analysis of project data. The suggested algorithms for assessing the parameters of the statistical models yield reliable estimates. The theoretical problems of developing robust models built on the methodology of minimax estimates are reviewed, and an algorithm for the situation of asymmetric "contamination" is developed.
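
    A minimal sketch of a Huber-type robust regression of the kind referred to above, implemented as iteratively reweighted least squares with NumPy. The tuning constant and the synthetic "contaminated" data are illustrative and are not the authors' project data or exact algorithm.

      import numpy as np

      def huber_irls(x, y, k=1.345, n_iter=50, tol=1e-8):
          """Huber M-estimate of a straight-line fit via iteratively reweighted least squares.
          k is the Huber tuning constant in units of the (MAD-based) residual scale."""
          X = np.column_stack([np.ones_like(y), x])
          beta = np.linalg.lstsq(X, y, rcond=None)[0]            # ordinary least-squares start
          for _ in range(n_iter):
              r = y - X @ beta
              scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
              u = r / scale
              w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))    # Huber weights
              sw = np.sqrt(w)
              beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
              if np.max(np.abs(beta_new - beta)) < tol:
                  return beta_new
              beta = beta_new
          return beta

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.uniform(0, 10, 60)
          y = 2.0 + 0.8 * x + rng.normal(0.0, 0.3, 60)
          y[:6] += 8.0                                            # asymmetric "contamination" of six points
          print("OLS   fit:", np.linalg.lstsq(np.column_stack([np.ones_like(y), x]), y, rcond=None)[0])
          print("Huber fit:", huber_irls(x, y))

    On such asymmetrically contaminated data the ordinary least-squares fit is pulled towards the outliers, while the Huber weights cap their influence and recover coefficients close to the uncontaminated trend.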

  13. Sea-level-rise trends off the Indian coasts during the last two decades

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A.S.; Nidheesh, A.G.; Lengaigne, M.

    The present communication discusses sea-level-rise trends in the north Indian Ocean, particularly off the Indian coasts, based on estimates derived from satellite altimeter and tide-gauge data. Altimeter data analysis over the 1993–2012 period...

  14. Clinical trial methodology

    National Research Council Canada - National Science Library

    Peace, Karl E; Chen, Ding-Geng

    2011-01-01

    ... in the pharmaceutical industry, Clinical trial methodology emphasizes the importance of statistical thinking in clinical research and presents the methodology as a key component of clinical research...

  15. Regional approaches in high-rise construction

    Science.gov (United States)

    Iconopisceva, O. G.; Proskurin, G. A.

    2018-03-01

    The article focuses on the evolutionary process of high-rise construction. The aim of the study was to create a retrospective matrix reflecting the tasks of the study, such as structuring the most iconic high-rise objects within historical boundaries. The study is based on contemporary experience of high-rise construction in different countries. The main directions and regional specifics in the field of high-rise construction, as well as the factors influencing the further evolution of the process, are analyzed. The main changes in architectural stylistics, form-making and structural solutions that focus on the principles of energy efficiency and the bio-positivity of "sustainable buildings", as well as the search for a new typology, are noted. The most universal structural methods and solutions that have proved particularly popular are generalized. The new typology of high-rises and the individual approach to the urban context are noted. The results of the study, presented as a graphical scheme, make it possible to represent the whole evolution of high-rise construction. The new spatial forms of high-rises give them a new role within urban environments. Futuristic hyper-scalable concepts take on the functions of autonomous urban space themselves and demonstrate how high-rises can replace the multifunctional urban fabric by developing it inside their shells.

  16. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  17. Assessing Flood Risk Under Sea Level Rise and Extreme Sea Levels Scenarios: Application to the Ebro Delta (Spain)

    Science.gov (United States)

    Sayol, J. M.; Marcos, M.

    2018-02-01

    This study presents a novel methodology to estimate the impact of local sea level rise and extreme surges and waves in coastal areas under climate change scenarios. The methodology is applied to the Ebro Delta, a valuable and vulnerable low-lying wetland located in the northwestern Mediterranean Sea. Projections of local sea level accounting for all contributions to mean sea level changes, including thermal expansion, dynamic changes, fresh water addition and glacial isostatic adjustment, have been obtained from regionalized sea level projections during the 21st century. Particular attention has been paid to the uncertainties, which have been derived from the spread of the multi-model ensemble combined with seasonal/inter-annual sea level variability from local tide gauge observations. In addition, vertical land movements have also been integrated to estimate local relative sea level rise. On the other hand, regional projections over the Mediterranean basin of storm surges and wind-waves have been used to evaluate changes in extreme events. The compound effects of surges and extreme waves have been quantified using their joint probability distributions. Finally, offshore sea level projections from extreme events superimposed on mean sea level have been propagated onto a high resolution digital elevation model of the study region in order to construct flood hazards maps for mid and end of the 21st century and under two different climate change scenarios. The effect of each contribution has been evaluated in terms of percentage of the area exposed to coastal hazards, which will help to design more efficient protection and adaptation measures.
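
    As a toy illustration of quantifying compound surge-wave effects through their joint distribution, the sketch below compares an empirical joint exceedance probability with the value implied by independence; the synthetic, positively correlated data merely stand in for the regional surge and wave projections used in the study.

      import numpy as np

      def joint_exceedance(surge, hs, surge_thr, hs_thr):
          """Empirical probability that storm surge and significant wave height
          exceed their thresholds in the same event."""
          surge = np.asarray(surge)
          hs = np.asarray(hs)
          return np.mean((surge > surge_thr) & (hs > hs_thr))

      if __name__ == "__main__":
          rng = np.random.default_rng(42)
          n = 200_000
          # Synthetic correlated surge (m) and wave height (m) via a Gaussian copula.
          z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=n)
          surge = 0.3 * np.exp(0.4 * z[:, 0])       # illustrative surge climate
          hs = 1.5 + np.exp(0.5 * z[:, 1])          # illustrative wave climate

          s_thr, h_thr = np.quantile(surge, 0.99), np.quantile(hs, 0.99)
          p_joint = joint_exceedance(surge, hs, s_thr, h_thr)
          p_indep = np.mean(surge > s_thr) * np.mean(hs > h_thr)
          print(f"joint exceedance: {p_joint:.5f}   under independence: {p_indep:.5f}")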

  18. An LWR design decision Methodology

    International Nuclear Information System (INIS)

    Leahy, T.J.; Rees, D.C.; Young, J.

    1982-01-01

    While all parties involved in nuclear plant regulation endeavor to make decisions which optimize the considerations of plant safety and financial impacts, these decisions are generally made without the benefit of a systematic and rigorous approach to the questions confronting the decision makers. A Design Decision Methodology has been developed which provides such a systematic approach. By employing this methodology, which makes use of currently accepted probabilistic risk assessment techniques and cost estimation, informed decisions may be made against a background of comparisons between the relative levels of safety and costs associated with various design alternatives

  19. Generalized Response Surface Methodology : A New Metaheuristic

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2006-01-01

    Generalized Response Surface Methodology (GRSM) is a novel general-purpose metaheuristic based on Box and Wilson's Response Surface Methodology (RSM). Both GRSM and RSM estimate local gradients to search for the optimal solution. These gradients use local first-order polynomials. GRSM, however, uses
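
    A minimal sketch of the first-order building block shared by RSM and GRSM as described above: fit a local first-order polynomial to responses around the current point and use its slope coefficients as the estimated gradient for a steepest-descent step. The test function, design size and step length are illustrative.

      import numpy as np

      def local_gradient(f, x0, delta=0.1, n_runs=20, noise=0.0, rng=None):
          """Estimate the local gradient of a (possibly noisy) response f at x0 by
          least-squares fitting a first-order polynomial to n_runs design points."""
          rng = rng or np.random.default_rng(0)
          d = len(x0)
          X = x0 + rng.uniform(-delta, delta, size=(n_runs, d))       # local design
          y = np.array([f(x) + noise * rng.normal() for x in X])
          A = np.column_stack([np.ones(n_runs), X - x0])              # columns: [1, (x - x0)]
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          return coef[1:]                                             # slope terms = gradient estimate

      def rsm_search(f, x0, steps=30, step_size=0.2, **kw):
          """Simple first-order RSM search: repeatedly move against the estimated gradient."""
          x = np.array(x0, dtype=float)
          for _ in range(steps):
              g = local_gradient(f, x, **kw)
              x -= step_size * g / (np.linalg.norm(g) + 1e-12)
          return x

      if __name__ == "__main__":
          f = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2   # toy response, optimum at (3, -1)
          print("RSM estimate of the optimum:", rsm_search(f, [0.0, 0.0], noise=0.05))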

  20. Uncertainty analysis for results of thermal hydraulic codes of best-estimate-type

    International Nuclear Information System (INIS)

    Alva N, J.

    2010-01-01

    In this thesis, some fundamental knowledge is presented about uncertainty analysis and about the diverse methodologies applied in the study of nuclear power plant transient events, particularly those related to thermal hydraulic phenomena. The concepts and methodologies mentioned in this work come from a wide bibliographical survey of the nuclear power field. Methodologies for uncertainty analysis have been developed by quite diverse institutions and have been widely used worldwide for application to results from best-estimate-type computer codes in nuclear reactor thermal hydraulics and safety analysis. The main uncertainty sources, types of uncertainties, and aspects related to best estimate modeling and methods are also introduced. Once the main bases of uncertainty analysis have been set out and some of the known methodologies introduced, the CSAU methodology, which is applied in the analyses, is presented in detail. The main objective of this thesis is to compare the results of an uncertainty and sensitivity analysis using the Response Surface Technique with those obtained by applying the Wilks formula, for a loss-of-coolant experiment and a rise event in a BWR. Both techniques are options for the uncertainty and sensitivity analysis part of the CSAU methodology, which was developed for the analysis of transients and accidents at nuclear power plants and is the basis of most of the methodologies used in licensing of nuclear power plants practically everywhere. Finally, the results of applying both techniques are compared and discussed. (Author)
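
    The Wilks-formula option mentioned above fixes the number of code runs needed for a one-sided tolerance limit regardless of how many uncertain inputs are sampled. A minimal sketch, assuming the standard one-sided first- and second-order formulas (which give the familiar 59 and 93 runs at the 95%/95% level):

      from math import comb

      def wilks_runs(coverage=0.95, confidence=0.95, order=1):
          """Smallest number of runs N such that the order-th largest output bounds
          the 'coverage' quantile with at least 'confidence' (one-sided Wilks)."""
          def achieved_confidence(n):
              # P(at least 'order' of the n outputs exceed the coverage quantile)
              return 1.0 - sum(comb(n, j) * (1.0 - coverage) ** j * coverage ** (n - j)
                               for j in range(order))
          n = order
          while achieved_confidence(n) < confidence:
              n += 1
          return n

      if __name__ == "__main__":
          print("95/95 one-sided, 1st order:", wilks_runs())           # 59
          print("95/95 one-sided, 2nd order:", wilks_runs(order=2))    # 93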

  1. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  2. Rising tides, rising gates: The complex ecogeomorphic response of coastal wetlands to sea-level rise and human interventions

    Science.gov (United States)

    Sandi, Steven G.; Rodríguez, José F.; Saintilan, Neil; Riccardi, Gerardo; Saco, Patricia M.

    2018-04-01

    Coastal wetlands are vulnerable to submergence due to sea-level rise, as shown by predictions of up to 80% of global wetland loss by the end of the century. Coastal wetlands with mixed mangrove-saltmarsh vegetation are particularly vulnerable because sea-level rise can promote mangrove encroachment on saltmarsh, reducing overall wetland biodiversity. Here we use an ecogeomorphic framework that incorporates hydrodynamic effects, mangrove-saltmarsh dynamics, and soil accretion processes to assess the effects of control structures on wetland evolution. Migration and accretion patterns of mangrove and saltmarsh are heavily dependent on topography and control structures. We find that current management practices that incorporate a fixed gate for the control of mangrove encroachment are useful initially, but soon become ineffective due to sea-level rise. Raising the gate, to counteract the effects of sea level rise and promote suitable hydrodynamic conditions, excludes mangrove and maintains saltmarsh over the entire simulation period of 100 years

  3. The NLC Software Requirements Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Shoaee, Hamid

    2002-08-20

    We describe the software requirements and development methodology developed for the NLC control system. Given the longevity of that project, and the likely geographical distribution of the collaborating engineers, the planned requirements management process is somewhat more formal than the norm in high energy physics projects. The short term goals of the requirements process are to accurately estimate costs, to decompose the problem, and to determine likely technologies. The long term goal is to enable a smooth transition from high level functional requirements to specific subsystem and component requirements for individual programmers, and to support distributed development. The methodology covers both ends of that life cycle. It covers both the analytical and documentary tools for software engineering, and project management support. This paper introduces the methodology, which is fully described in [1].

  4. Modular High Voltage Pulse Converter for Short Rise and Decay Times

    NARCIS (Netherlands)

    Mao, S.

    2018-01-01

    This thesis explores a modular HV pulse converter technology with short rise and decay times. A systematic methodology to derive and classify HV architectures based on a modularization level of power building blocks of the HV pulse converter is developed to summarize existing architectures and

  5. Rural demographic change, rising wages and the restructuring of Chinese agriculture

    DEFF Research Database (Denmark)

    Li, Tianxiang; Yu, Wusheng; Baležentis, Tomas

    2017-01-01

    Purpose The purpose of this paper is to identify the effects of recent demographic transition and rising labor costs on agricultural production structure and pattern in China during 1998-2012. Design/methodology/approach The authors, first, theoretically discuss the effects of changing relative...

  6. Covariance Evaluation Methodology for Neutron Cross Sections

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, b.; Songzoni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
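
    A minimal sketch of the Bayesian (Kalman-filter/GLS) update step that underlies such covariance evaluations: a prior parameter covariance, a sensitivity matrix and an experimental covariance yield updated parameters with a reduced posterior covariance. The dimensions and numbers are illustrative, not evaluated nuclear data.

      import numpy as np

      def kalman_update(p, P, G, y_exp, V):
          """Bayesian/GLS update of parameters p with prior covariance P,
          given measurements y_exp with sensitivities G and covariance V."""
          S = G @ P @ G.T + V                       # innovation covariance
          K = P @ G.T @ np.linalg.inv(S)            # gain
          p_post = p + K @ (y_exp - G @ p)
          P_post = P - K @ G @ P                    # posterior covariance (reduced uncertainties)
          return p_post, P_post

      if __name__ == "__main__":
          p = np.array([1.0, 2.0])                  # prior parameters (illustrative)
          P = np.diag([0.10, 0.20]) ** 2            # prior covariance
          G = np.array([[1.0, 0.5],                 # sensitivities of 3 "measurements" to 2 parameters
                        [0.0, 1.0],
                        [2.0, 1.0]])
          V = np.diag([0.05, 0.05, 0.08]) ** 2      # experimental covariance
          y_exp = np.array([2.05, 2.10, 4.30])
          p_post, P_post = kalman_update(p, P, G, y_exp, V)
          print("posterior parameters:", p_post)
          print("posterior std devs:  ", np.sqrt(np.diag(P_post)))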

  7. Spatial Development Modeling Methodology Application Possibilities in Vilnius

    Directory of Open Access Journals (Sweden)

    Lina Panavaitė

    2017-05-01

    In order to control the continued development of high-rise buildings and their irreversible visual impact on the overall silhouette of the city, the great cities of the world have introduced new methodological principles into their spatial development models. These methodologies and spatial planning guidelines focus not only on the controlled development of high-rise buildings, but also on the spatial modelling of the whole city, by defining the main development criteria and estimating possible consequences. Vilnius is no exception; however, the re-establishment of Lithuanian independence triggered an uncontrolled urbanization process, so most of the city's development regulations emerged as a consequence of the unmanaged legalization of investors' expectations. The importance of a consistent urban fabric, and of conserving and presenting the city's most important objects, gained attention only when an actual threat emerged of overshadowing them with new architecture, alongside unmanaged urbanization in the city centre and urban sprawl in the suburbs caused by land-use projects. Vilnius' current spatial planning documents clearly define the urban structure and key development principles, but the definitions are relatively abstract, leading to uniform building coverage requirements for territories with distinct qualities and to simplified planar designs that do not meet quality standards. The overall quality of urban architecture is not regulated. The article deals with current spatial modelling methods, their individual parts, principles, criteria for quality assessment and their applicability in Vilnius. The text contains an outline of possible building coverage regulations and impact assessment criteria for new development, together with a compendium of requirements for high-quality spatial planning and building design.

  8. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    Science.gov (United States)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author touches on the following issues: problems in implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, and the assessment of the influence of the project's external environment on the effectiveness of project activities under crisis conditions and on the quality of project management. The article proposes an algorithm and a methodological approach to managing the quality of developer project efficiency based on an operational evaluation of current yield. The methodology for calculating the current efficiency of a high-rise construction development project has been updated.
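
    The abstract does not spell the indicator out, so purely as a hypothetical illustration of an operational "current yield" check for a development project, the sketch below annualizes income earned to date, relates it to capital invested to date, and flags projects that fall below a target yield. The names and threshold are assumptions, not the author's algorithm.

      def current_yield(income_to_date, capital_invested_to_date, months_elapsed):
          """Annualized current yield: income earned so far, scaled to a full year,
          divided by the capital already invested (hypothetical indicator)."""
          annualized_income = income_to_date * 12.0 / months_elapsed
          return annualized_income / capital_invested_to_date

      if __name__ == "__main__":
          TARGET_YIELD = 0.12                       # assumed management threshold
          y = current_yield(income_to_date=45e6, capital_invested_to_date=600e6, months_elapsed=9)
          print(f"current yield = {y:.1%} ->", "on track" if y >= TARGET_YIELD else "corrective action needed")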

  9. Climate Adaptation and Sea Level Rise

    Science.gov (United States)

    EPA supports the development and maintenance of water utility infrastructure across the country. Included in this effort is helping the nation’s water utilities anticipate, plan for, and adapt to risks from flooding, sea level rise, and storm surge.

  10. Interconnect rise time in superconducting integrating circuits

    International Nuclear Information System (INIS)

    Preis, D.; Shlager, K.

    1988-01-01

    The influence of resistive losses on the voltage rise time of an integrated-circuit interconnection is reported. A distributed-circuit model is used to represent the interconnect. Numerous parametric curves are presented, based on numerical evaluation of the exact analytical expression for the model's transient response. For the superconducting case, in which the series resistance of the interconnect approaches zero, the step-response rise time is longer but the signal strength increases significantly
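
    A minimal numerical sketch of the distributed-circuit idea: approximate the interconnect as an n-section RC ladder, apply a unit step, and read off the 10-90% rise time at the open far end. The element values are illustrative; note that this purely resistive-capacitive model does not cover the superconducting limit, where the series resistance vanishes and the line must instead be treated as an LC transmission line, which is the regime behind the longer rise time reported above.

      import numpy as np

      def rc_ladder_rise_time(R_total, C_total, n=60, v_step=1.0):
          """10-90% rise time at the open far end of a uniform RC line, modelled as
          n equal r-c sections driven by a v_step input (explicit time stepping)."""
          r, c = R_total / n, C_total / n
          v = np.zeros(n)                          # node voltages along the line
          dt = 0.2 * r * c                         # small enough for a stable explicit scheme
          t, t10 = 0.0, None
          while True:
              v_left = np.concatenate(([v_step], v[:-1]))              # driving side of each series r
              i_in = (v_left - v) / r                                   # current entering each node
              i_out = np.concatenate(((v[:-1] - v[1:]) / r, [0.0]))     # current leaving to the right
              v = v + dt * (i_in - i_out) / c
              t += dt
              if t10 is None and v[-1] >= 0.1 * v_step:
                  t10 = t
              if t10 is not None and v[-1] >= 0.9 * v_step:
                  return t - t10

      if __name__ == "__main__":
          # Illustrative interconnect values: 200 ohm and 2 pF total.
          R, C = 200.0, 2e-12
          tr = rc_ladder_rise_time(R, C)
          print(f"10-90% rise time ~ {tr * 1e12:.0f} ps   (R*C = {R * C * 1e12:.0f} ps)")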

  11. Committed sea-level rise under the Paris Agreement and the legacy of delayed mitigation action.

    Science.gov (United States)

    Mengel, Matthias; Nauels, Alexander; Rogelj, Joeri; Schleussner, Carl-Friedrich

    2018-02-20

    Sea-level rise is a major consequence of climate change that will continue long after emissions of greenhouse gases have stopped. The 2015 Paris Agreement aims at reducing climate-related risks by reducing greenhouse gas emissions to net zero and limiting global-mean temperature increase. Here we quantify the effect of these constraints on global sea-level rise until 2300, including Antarctic ice-sheet instabilities. We estimate median sea-level rise between 0.7 and 1.2 m, if net-zero greenhouse gas emissions are sustained until 2300, varying with the pathway of emissions during this century. Temperature stabilization below 2 °C is insufficient to hold median sea-level rise until 2300 below 1.5 m. We find that each 5-year delay in near-term peaking of CO 2 emissions increases median year 2300 sea-level rise estimates by ca. 0.2 m, and extreme sea-level rise estimates at the 95th percentile by up to 1 m. Our results underline the importance of near-term mitigation action for limiting long-term sea-level rise risks.

  12. HiRISE: The People's Camera

    Science.gov (United States)

    McEwen, A. S.; Eliason, E.; Gulick, V. C.; Spinoza, Y.; Beyer, R. A.; HiRISE Team

    2010-12-01

    The High Resolution Imaging Science Experiment (HiRISE) camera, orbiting Mars since 2006 on the Mars Reconnaissance Orbiter (MRO), has returned more than 17,000 large images with scales as small as 25 cm/pixel. From its beginning, the HiRISE team has followed “The People’s Camera” concept, with rapid release of useful images, explanations, and tools, and facilitating public image suggestions. The camera includes 14 CCDs, each read out into 2 data channels, so compressed images are returned from MRO as 28 long (up to 120,000 line) images that are 1024 pixels wide (or binned 2x2 to 512 pixels, etc.). This raw data is very difficult to use, especially for the public. At the HiRISE operations center the raw data are calibrated and processed into a series of B&W and color products, including browse images and JPEG2000-compressed images and tools to make it easy for everyone to explore these enormous images (see http://hirise.lpl.arizona.edu/). Automated pipelines do all of this processing, so we can keep up with the high data rate; images go directly to the format of the Planetary Data System (PDS). After students visually check each image product for errors, they are fully released just 1 month after receipt; captioned images (written by science team members) may be released sooner. These processed HiRISE images have been incorporated into tools such as Google Mars and World Wide Telescope for even greater accessibility. 51 Digital Terrain Models derived from HiRISE stereo pairs have been released, resulting in some spectacular flyover movies produced by members of the public and viewed up to 50,000 times according to YouTube. Public targeting began in 2007 via NASA Quest (http://marsoweb.nas.nasa.gov/HiRISE/quest/) and more than 200 images have been acquired, mostly by students and educators. At the beginning of 2010 we released HiWish (http://www.uahirise.org/hiwish/), opening HiRISE targeting to anyone in the world with Internet access, and already more

  13. The methodological convention 2,0 for the estimation of environmental costs. An economic evaluation of environmental damages; Methodenkonvention 2.0 zur Schaetzung von Umweltkosten. Oekonomische Bewertung von Umweltschaeden

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-08-15

    The reliable estimation of environmental damage costs requires a high degree of transparency of the objectives, assumptions and methods of assessment in order to ensure a correct classification and comparability of the cost factors. The methods convention under consideration aims to develop uniform standards for the technical evaluation of environmental costs and to improve the transparency of the estimates.

  14. A concise methodology for the estimation of elemental concentration effects on mesoscale cohesion of non-ferrous covalent glasses: The case of Se(80−x)Ge(20−x)In(x=0,5,10,15)

    Directory of Open Access Journals (Sweden)

    Georgios S.E. Antipas

    2015-09-01

    The link between the electronic state and the mesoscale of covalent glasses is not settled. A functional means of addressing the mesoscale is via generalizing glass properties (e.g. cohesion) on the basis of atomic clusters. Derivation of the most representative such cluster formations is not streamlined, however. Here, numerical pair correlation and ab initio energetic datasets are presented for the case of amorphous selenium-rich covalent glasses, obtained via a new, concise methodology relating mesoscopic cohesion to local atomic order and to the system's electronic structure. The methodology consisted of selecting clusters on the basis of the variation of the atomic environment statistics of total coordination, partial coordination by the matrix element and cluster number density along the radial direction of a Reverse Monte Carlo supercell, the latter attained by fitting total scattering data.

  15. Model of investment appraisal of high-rise construction with account of cost of land resources

    Science.gov (United States)

    Okolelova, Ella; Shibaeva, Marina; Trukhina, Natalya

    2018-03-01

    The article considers the problems and potential of high-rise construction in the context of global urbanization. The results of theoretical and practical studies on the appraisal of investments in high-rise construction are provided. High-rise construction has a number of apparent advantages under the modern conditions of megacity development, and above all it is economically efficient. Amid a serious shortage of construction sites, skyscrapers successfully meet the need for manufacturing, office and living premises. Nevertheless, there are plenty of issues related to high-rise construction, and only thorough scrutiny of them allows the real economic efficiency of this branch to be estimated. The article focuses on the question of the economic efficiency of high-rise construction. The suggested model allows the parameters of a facility under construction to be adjusted, specifying the market value as well as the coefficient for appreciation of the net construction cost, which depends on the number of storeys, in the form of a function or of discrete values.
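
    Purely as a hypothetical illustration of how a storey-dependent appreciation coefficient on net construction cost can be traded off against market value, the sketch below screens alternative storey counts; the cost function, coefficient and prices are assumptions, not the authors' calibrated model.

      def appraise(storeys, floor_area=800.0, base_cost_per_m2=900.0,
                   price_per_m2=1500.0, saleable_share=0.8):
          """Compare revenue and cost for a building with a given number of storeys.
          Net construction cost per m2 is scaled by an appreciation coefficient
          that grows with the number of storeys (illustrative function)."""
          k = 1.0 + 0.012 * max(storeys - 10, 0)        # cost appreciation above 10 storeys (assumed)
          area = storeys * floor_area
          cost = area * base_cost_per_m2 * k
          revenue = area * saleable_share * price_per_m2
          return revenue - cost, revenue / cost

      if __name__ == "__main__":
          for s in (10, 20, 30, 40, 50):
              margin, ratio = appraise(s)
              print(f"{s:3d} storeys: margin = {margin / 1e6:7.1f} M, revenue/cost = {ratio:.2f}")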

  16. Future rise of the sea level: consequences and strategies on the shoreline

    International Nuclear Information System (INIS)

    Teisson, C.

    1991-11-01

    The mean sea level may rise in the near future due to the warming of the atmosphere associated with the 'greenhouse effect'. The alarming estimates issued in the 1980s (several meters of rise over the next centuries) have since been lowered: the ice sheets, the melting of which could induce such a rise, do not show signs of instability. A rise of 30 to 50 cm is likely to occur by the middle of the next century; there is a 25% probability that the rise of sea level relative to the year 1980 will exceed 1 meter by 2100. The consequences of such a rise on the shoreline and on maritime works are reviewed, and planning strategies are discussed. This study has been performed in the framework of a convention between EDF-LNH and the Sea State Secretary (Service Technique des Ports Maritimes et Voies Navigables) 41 refs., 31 figs., 6 tabs

  17. Socioecological Aspects of High-rise Construction

    Science.gov (United States)

    Eichner, Michael; Ivanova, Zinaida

    2018-03-01

    In this article, the authors consider the socioecological problems that arise in the construction and operation of high-rise buildings. They examine different points of view on high-rise construction and note that the approaches to this problem differ widely. They also analyse projects by modern architects and the attempts made to overcome negative impacts on nature and mankind. The article contains material from sociological research confirming the ambivalent attitude of the urban population to high-rise buildings. In addition, a sociological survey by one of the authors reveals the level of environmental preparedness of university students studying in the field of "Construction of unique buildings and structures", raising the question of how ready future specialists are to take socioecological problems into account. The authors conclude that the construction of high-rise buildings is associated with huge social and environmental risks and a negative impact on the biosphere and human health. This requires future specialists to have deeper knowledge of sustainable design methods and environmentally friendly construction technologies. Professor M. Eichner presents in the article the results of his case study project on the implementation of holistic eco-sustainable construction principles for a mixed-use high-rise building in the metropolis of Cairo.

  18. Strategic advantages of high-rise construction

    Directory of Open Access Journals (Sweden)

    Yaskova Natalya

    2018-01-01

    Full Text Available Traditional methods for assessing the competitiveness of different types of real estate, in the context of the sweeping changes brought by a new technological way of life, do not provide building solutions that are correct from a strategic perspective. There are many challenges due to changes in consumer behavior in the housing area. A multiplicity of life models, a variety of opportunities and priorities, and traditions and new trends in construction should be assessed in terms of prospective benefits in the environment of the emerging new world order. At the same time, the main discourse on high-rise construction mostly relates to its design features, technical innovations, and architectural accents. We need to clarify the criteria for the economic evaluation of high-rise construction in order to provide decisions with clear and quantifiable contexts. The suggested approach to assessing the strategic advantages of high-rise construction and the prospects for capitalization of high-rise buildings poses new challenges for the economy: to identify adequate quantitative methods for assessing the economic efficiency of high-rise buildings, taking into account all stages of their life cycle.

  19. Socioecological Aspects of High-rise Construction

    Directory of Open Access Journals (Sweden)

    Eichner Michael

    2018-01-01

    Full Text Available In this article, the authors consider the socioecological problems that arise in the construction and operation of high-rise buildings. They examine different points of view on high-rise construction and note that the approaches to this problem differ widely. They also analyse projects by modern architects and the attempts made to overcome negative impacts on nature and mankind. The article contains material from sociological research confirming the ambivalent attitude of the urban population to high-rise buildings. In addition, a sociological survey by one of the authors reveals the level of environmental preparedness of university students studying in the field of "Construction of unique buildings and structures", raising the question of how ready future specialists are to take socioecological problems into account. The authors conclude that the construction of high-rise buildings is associated with huge social and environmental risks and a negative impact on the biosphere and human health. This requires future specialists to have deeper knowledge of sustainable design methods and environmentally friendly construction technologies. Professor M. Eichner presents in the article the results of his case study project on the implementation of holistic eco-sustainable construction principles for a mixed-use high-rise building in the metropolis of Cairo.

  20. Strategic advantages of high-rise construction

    Science.gov (United States)

    Yaskova, Natalya

    2018-03-01

    Traditional methods for assessing the competitiveness of different types of real estate, in the context of the sweeping changes brought by a new technological way of life, do not provide building solutions that are correct from a strategic perspective. There are many challenges due to changes in consumer behavior in the housing area. A multiplicity of life models, a variety of opportunities and priorities, and traditions and new trends in construction should be assessed in terms of prospective benefits in the environment of the emerging new world order. At the same time, the main discourse on high-rise construction mostly relates to its design features, technical innovations, and architectural accents. We need to clarify the criteria for the economic evaluation of high-rise construction in order to provide decisions with clear and quantifiable contexts. The suggested approach to assessing the strategic advantages of high-rise construction and the prospects for capitalization of high-rise buildings poses new challenges for the economy: to identify adequate quantitative methods for assessing the economic efficiency of high-rise buildings, taking into account all stages of their life cycle.

  1. Development of a model to simulate groundwater inundation induced by sea-level rise and high tides in Honolulu, Hawaii.

    Science.gov (United States)

    Habel, Shellie; Fletcher, Charles H; Rotzoll, Kolja; El-Kadi, Aly I

    2017-05-01

    Many of the world's largest cities face risk of sea-level rise (SLR) induced flooding owing to their limited elevations and proximities to the coastline. Within this century, global mean sea level is expected to reach magnitudes that will exceed the ground elevation of some built infrastructure. The concurrent rise of coastal groundwater will produce additional sources of inundation resulting from narrowing and loss of the vertical unsaturated subsurface space. This has implications for the dense network of buried and low-lying infrastructure that exists across urban coastal zones. Here, we describe a modeling approach that simulates narrowing of the unsaturated space and groundwater inundation (GWI) generated by SLR-induced lifting of coastal groundwater. The methodology combines terrain modeling, groundwater monitoring, estimation of tidal influence, and numerical groundwater-flow modeling to simulate future flood scenarios considering user-specified tide stages and magnitudes of SLR. We illustrate the value of the methodology by applying it to the heavily urbanized and low-lying Waikiki area of Honolulu, Hawaii. Results indicate that SLR of nearly 1 m generates GWI across 23% of the 13 km² study area, threatening $5 billion of taxable real estate and 48 km of roadway. Analysis of current conditions reveals that 86% of 259 active cesspool sites in the study area are likely inundated. This suggests that cesspool effluent is currently entering coastal groundwater, which not only leads to degradation of coastal environments, but also presents a future threat to public health as GWI would introduce effluent at the ground surface. Copyright © 2017 Elsevier Ltd. All rights reserved.
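
    A minimal illustrative sketch of the core flood-mapping step is given below. It simply compares a DEM against a projected water table lifted one-for-one with sea level; the study itself uses a calibrated numerical groundwater-flow model and tidal estimation, so the function, array names, and the 0.98 m scenario here are assumptions for illustration only.

```python
import numpy as np

def groundwater_inundation(dem, water_table_now, slr, tide_offset=0.0):
    """Flag DEM cells where the projected water table reaches the ground surface.

    dem, water_table_now : 2-D elevation arrays (m, common vertical datum)
    slr                  : sea-level rise increment (m)
    tide_offset          : additional tide stage above the modeled datum (m)

    Simplification: the coastal water table is assumed to rise one-for-one
    with sea level; the published methodology instead simulates this with a
    numerical groundwater-flow model.
    """
    projected_table = water_table_now + slr + tide_offset
    unsat_thickness = dem - projected_table      # remaining unsaturated space
    inundated = unsat_thickness <= 0.0           # groundwater at or above ground
    return inundated, unsat_thickness, inundated.mean()

# Hypothetical 30 x 30 grid: gently sloping terrain over a flat water table.
dem = np.linspace(0.2, 3.0, 900).reshape(30, 30)
water_table = np.full_like(dem, 0.1)
flooded, unsat, fraction = groundwater_inundation(dem, water_table, slr=0.98)
print(f"fraction of cells with groundwater inundation: {fraction:.0%}")
```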

  2. Distribution of flexural deflection in the worldwide outer rise area

    Science.gov (United States)

    Lin, Zi-Jun; Lin, Jing-Yi; Lin, Yi-Chin; Chin, Shao-Jinn; Chen, Yen-Fu

    2015-04-01

    The outer rise on the fringe of a subduction system is caused by an accreted load on the flexed oceanic lithosphere. The magnitude of the deflection is usually linked to the stress state borne by the oceanic plate. In a coupled subduction zone, stress accumulates across the plate boundary, which should affect the flexural properties of the subducted plate. Thus, variation in the shape of the outer rise may reflect the seismogenic characteristics of the subduction system. In this study, we intend to find the correlation between the flexural deflection (Wb) of the outer rise and subduction zone properties by comparing several slab parameters with the Wb distribution. The estimation of Wb is based on the available bathymetry data, and the statistical analysis of earthquakes uses the global ISC earthquake catalog for the period 1900-2015. Our result shows a progressive change of Wb in space, suggesting a robust calculation. The average Wb of worldwide subduction systems ranges from 348 to 682 m. No clear distinction in the range of Wb was observed between different subduction zones. However, in weakly coupled subduction systems, the standard deviation of Wb is generally larger. Relatively large Wb generally occurs in the center of a trench system, whereas small Wb occurs at the two ends of the trench. The comparison of Wb with several slab parameters shows that Wb may be correlated with the maximum magnitude and the number of earthquakes. Otherwise, no clear relationship with other parameters could be obtained.
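
    The abstract does not state the plate model behind the Wb estimate. For orientation, the deflection profile commonly assumed for an end-loaded ("broken") elastic plate seaward of a trench is sketched below, with w0 the deflection at the trench axis and Te the effective elastic thickness; treat this as background rather than the authors' exact formulation.

```latex
% Commonly assumed thin elastic plate flexure seaward of a trench (x measured
% from the trench axis); alpha is the flexural parameter and D the rigidity.
\begin{align}
  w(x) &= w_0\, e^{-x/\alpha}\cos\!\frac{x}{\alpha}, &
  \alpha &= \left[\frac{4D}{(\rho_m-\rho_w)\,g}\right]^{1/4}, &
  D &= \frac{E\,T_e^{3}}{12\,(1-\nu^{2})}.
\end{align}
```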

  3. Beam Induced Pressure Rise at RHIC

    CERN Document Server

    Zhang, S Y; Bai, Mei; Blaskiewicz, Michael; Cameron, Peter; Drees, Angelika; Fischer, Wolfram; Gullotta, Justin; He, Ping; Hseuh Hsiao Chaun; Huang, Haixin; Iriso, Ubaldo; Lee, Roger C; Litvinenko, Vladimir N; MacKay, William W; Nicoletti, Tony; Oerter, Brian; Peggs, Steve; Pilat, Fulvia Caterina; Ptitsyn, Vadim; Roser, Thomas; Satogata, Todd; Smart, Loralie; Snydstrup, Louis; Thieberger, Peter; Trbojevic, Dejan; Wang, Lanfa; Wei, Jie; Zeno, Keith

    2005-01-01

    Beam induced pressure rise in the RHIC warm sections is currently one of the machine intensity and luminosity limits. This pressure rise is mainly due to electron cloud effects. The RHIC warm section electron cloud is associated with longer bunch spacings compared with other machines, and is distributed non-uniformly around the ring. In addition to the countermeasures for normal electron cloud, such as NEG-coated pipe, solenoids, beam scrubbing, bunch gaps, and larger bunch spacing, other studies and beam tests toward understanding and counteracting the RHIC warm electron cloud are of interest. These include ion desorption studies and the test of anti-grazing ridges. For high bunch intensities and the shortest bunch spacings, pressure rises at certain locations in the cryogenic region have been observed during the past two runs. Beam studies are planned for the current 2005 run and the results will be reported.

  4. Rising Long-term Interest Rates

    DEFF Research Database (Denmark)

    Hallett, Andrew Hughes

    Rather than chronicle recent developments in European long-term interest rates as such, this paper assesses the impact of increases in those interest rates on economic performance and inflation. That puts us in a position to evaluate the economic pressures for further rises in those rates......, the first question posed in this assignment, and the scope for overshooting (the second question), and then make some illustrative predictions of future interest rates in the euro area. We find a wide range of effects from rising interest rates, mostly small and mostly negative, focused on investment...... till the emerging European recovery is on a firmer basis and capable of overcoming increases in the cost of borrowing and shrinking fiscal space. There is also an implication that worries about rising/overshooting interest rates often reflect the fact that inflation risks are unequally distributed...

  5. Integrating wildfire plume rises within atmospheric transport models

    Science.gov (United States)

    Mallia, D. V.; Kochanski, A.; Wu, D.; Urbanski, S. P.; Krueger, S. K.; Lin, J. C.

    2016-12-01

    Wildfires can generate significant pyro-convection that is responsible for releasing pollutants, greenhouse gases, and trace species into the free troposphere, which are then transported a significant distance downwind from the fire. Oftentimes, atmospheric transport and chemistry models have a difficult time resolving the transport of smoke from these wildfires, primarily due to deficiencies in estimating the plume injection height, which has been highlighted in previous work as the most important aspect of simulating wildfire plume transport. As a result of the uncertainties associated with modeled wildfire plume rise, researchers face difficulties modeling the impacts of wildfire smoke on air quality and constraining fire emissions using inverse modeling techniques. Currently, several plume rise parameterizations exist that are able to determine the injection height of fire emissions; however, the success of these parameterizations has been mixed. With the advent of WRF-SFIRE, the wildfire plume rise and injection height can now be explicitly calculated using a fire spread model (SFIRE) that is dynamically linked with the atmosphere simulated by WRF. However, this model has only been tested on a limited basis due to computational costs. Here, we will test the performance of WRF-SFIRE in addition to several commonly adopted plume parameterizations (Freitas, Sofiev, and Briggs) for the 2013 Patch Springs (Utah) and 2012 Baker Canyon (Washington) fires, for both of which observations of plume rise heights are available. These plume rise techniques will then be incorporated within a Lagrangian atmospheric transport model (STILT) in order to simulate CO and CO2 concentrations during NASA's CARVE Earth Science Airborne Program over Alaska during the summer of 2012. Initial model results showed that STILT model simulations were unable to reproduce enhanced CO concentrations produced by Alaskan fires observed during 2012. Near-surface concentrations were drastically

  6. Rising U.S. Earnings Inequality and Family Labor Supply: The Covariance Structure of Intrafamily Earnings

    OpenAIRE

    Dean R. Hyslop

    2001-01-01

    This paper studies the labor supply contributions to individual and family earnings inequality during the period of rising wage inequality in the early 1980's. Working couples have positively correlated labor market outcomes, which are almost entirely attributable to permanent factors. An intertemporal family labor supply model with this feature is used to estimate labor supply elasticities for husbands of 0.05, and wives of 0.40. This implies that labor supply explains little of the rising a...

  7. Spatial Hedonic Models for Measuring the Impact of Sea-Level Rise on Coastal Real Estate

    OpenAIRE

    Okmyung Bin; Ben Poulter; Christopher F. Dumas; John C. Whitehead

    2009-01-01

    This study uses a unique integration of geospatial and hedonic property data to estimate the impact of sea-level rise on coastal real estate in North Carolina. North Carolina’s coastal plain is one of several large terrestrial systems around the world threatened by rising sea-levels. High-resolution topographic LIDAR (Light Detection and Ranging) data are used to provide accurate inundation maps for all properties that will be at risk under six different sea-level rise scenarios. A simulation...

  8. Software engineering methodologies and tools

    Science.gov (United States)

    Wilcox, Lawrence M.

    1993-01-01

    Over the years many engineering disciplines have developed, including chemical, electronic, etc. Common to all engineering disciplines is the use of rigor, models, metrics, and predefined methodologies. Recently, a new engineering discipline has appeared on the scene, called software engineering. For over thirty years computer software has been developed and the track record has not been good. Software development projects often miss schedules, are over budget, do not give the user what is wanted, and produce defects. One estimate is there are one to three defects per 1000 lines of deployed code. More and more systems are requiring larger and more complex software for support. As this requirement grows, the software development problems grow exponentially. It is believed that software quality can be improved by applying engineering principles. Another compelling reason to bring the engineering disciplines to software development is productivity. It has been estimated that productivity of producing software has only increased one to two percent a year in the last thirty years. Ironically, the computer and its software have contributed significantly to the industry-wide productivity, but computer professionals have done a poor job of using the computer to do their job. Engineering disciplines and methodologies are now emerging supported by software tools that address the problems of software development. This paper addresses some of the current software engineering methodologies as a backdrop for the general evaluation of computer assisted software engineering (CASE) tools from actual installation of and experimentation with some specific tools.

  9. Future sea level rise constrained by observations and long-term commitment

    Science.gov (United States)

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-01-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28–56 cm, 37–77 cm, and 57–131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The “constrained extrapolation” approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648

  10. Future sea level rise constrained by observations and long-term commitment.

    Science.gov (United States)

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-03-08

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections.
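
    For readers unfamiliar with the semi-empirical family that the "constrained extrapolation" approach generalizes, its simplest member is the rate law below, in which the sensitivity a and the base temperature T0 are fitted to past observations. The actual model in the paper treats each sea-level contribution separately, which this one-line relation does not capture.

```latex
% Simplest semi-empirical sea-level model: the rate of rise scales with the
% temperature departure from a base value T_0 (a and T_0 are fitted).
\begin{equation}
  \frac{\mathrm{d}S}{\mathrm{d}t} = a\,\bigl(T(t)-T_0\bigr)
  \quad\Longrightarrow\quad
  S(t) = S(t_1) + a\int_{t_1}^{t}\bigl(T(t')-T_0\bigr)\,\mathrm{d}t'.
\end{equation}
```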

  11. Cities and Sea Level Rise: A Roadmap for Flood Hazard Adaptation

    Science.gov (United States)

    Horn, Diane; Cousins, Ann

    2016-04-01

    Coastal cities will face a range of increasingly severe challenges as sea level rises, and adaptation to future flood risk will require more than structural defences. Many cities will not be able to rely solely on engineering structures for protection and will need to develop a suite of policy responses to increase their resilience to impacts of rising sea level. The tools to promote flood risk adaptation are already within the capacity of most cities, with an assortment of policy tools available to address other land-use problems which can be refashioned and used to adapt to sea level rise. This study reviews approaches for urban adaptation through detailed analyses of case studies of cities which have developed flood adaptation strategies that combine structural defences with innovative approaches to living with flood risk. The aim of the overall project is to produce a 'roadmap' to guide practitioners through the process of analysing coastal flood risk in urban areas. Methodologies and tools to estimate vulnerability to coastal flooding, damages suffered, and the assessment of flood defences and adaptation measures are complemented with a discussion on the essential impact that local policy has on the treatment of coastal flooding and the constraints and opportunities that result from the specific country or locality characteristics in relation to economic, political, social and environmental priorities, which are likely to dictate the approach to coastal flooding and the actions proposed. Case studies of adaptation strategies used by Rotterdam, Bristol, Ho Chi Minh City and Norfolk, Virginia, are used to draw out a range of good practice elements that promote effective adaptation to sea level rise. These can be grouped into risk reduction, governance issues, and insurance, and can be used to provide examples of how other cities could adopt and implement flood adaptation strategies from a relatively limited starting position. Most cities will neither be able to

  12. PERSPECTIVE: The tripping points of sea level rise

    Science.gov (United States)

    Hecht, Alan D.

    2009-12-01

    When President Nixon created the US Environmental Protection Agency (EPA) in 1970 he said the environment must be perceived as a single, interrelated system. We are nowhere close to achieving this vision. Jim Titus and his colleagues [1] highlight one example of where one set of regulations or permits may be in conflict with another and where regulations were crafted in the absence of understanding the cumulative impact of global warming. The issue here is how to deal with the impacts of climate change on sea level and the latter's impact on wetland policies, clean water regulations, and ecosystem services. The Titus paper could also be called 'The tripping points of sea level rise'. Titus and his colleagues have looked at the impact of such sea level rise on the east coast of the United States. Adaptive responses include costly large-scale investment in shore protection (e.g. dikes, sand replenishment) and/or ecosystem migration (retreat), where coastal ecosystems move inland. Shore protection is limited by available funds, while ecosystem migrations are limited by available land use. The driving factor is the high probability of sea level rise due to climate change. Estimating sea level rise is difficult because of local land and coastal dynamics including rising or falling land areas. It is estimated that sea level could rise between 8 inches and 2 feet by the end of this century [2]. The extensive data analysis done by Titus et al. of current land use is important because, as they observe, 'property owners and land use agencies have generally not decided how they will respond to sea level rise, nor have they prepared maps delineating where shore protection and retreat are likely'. This is the first of two 'tripping points', namely the need for adaptive planning for a pending environmental challenge that will create economic and environment conflict among land owners, federal and state agencies, and businesses. One way to address this gap in adaptive management

  13. The Rise of the Digital Public Library

    Science.gov (United States)

    McKendrick, Joseph

    2012-01-01

    There is a growing shift to digital offerings among public libraries. Libraries increasingly are fulfilling roles as technology hubs for their communities, with high demand for technology and career development training resources. Ebooks and other digital materials are on the rise, while print is being scaled back. More libraries are turning to…

  14. Rise time spectroscopy in cadmium telluride detectors

    International Nuclear Information System (INIS)

    Scharager, Claude; Siffert, Paul; Carnet, Bernard; Le Meur, Roger.

    1980-11-01

    By a simultaneous analysis of the rise time and pulse amplitude distributions of the signals issued from various cadmium telluride detectors, it is possible to obtain information about surface and bulk trapping, the field distribution within the detectors, as well as charge collection and transport properties. These investigations have been performed on both pure and chlorine-doped materials for various surface preparation conditions [fr

  15. How oxygen gave rise to eukaryotic sex

    NARCIS (Netherlands)

    Hörandl, Elvira; Speijer, Dave

    2018-01-01

    9years ago. The large amount of ROS coming from a bacterial endosymbiont gave rise to DNA damage and vast increases in host genome mutation rates. Eukaryogenesis and chromosome evolution represent adaptations to oxidative stress. The host, an archaeon, most probably already had repair mechanisms

  16. Rising Political Consciousness: Transformational Learning in Malaysia.

    Science.gov (United States)

    Kamis, Mazalan; Muhamad, Mazanah

    As part of a larger study (not discussed) ten educated Malaysian citizens were interviewed to find whether their rising political consciousness, over a ten year period (1988-1999), indicated that their transformation was influenced by their culture. The subjects were between 35-45 years old, married, with an average of four children. All were…

  17. Can income redistribution help changing rising inequality?

    NARCIS (Netherlands)

    Salverda, W.

    2014-01-01

    This article compares the rise in inequality in net household incomes in a number of European countries and in Canada, the USA and Australia. Two important factors are used to explain this worrying trend: growing inequality of market incomes and/or declining redistribution of income through

  18. Why does a spinning egg rise?

    Science.gov (United States)

    Cross, Rod

    2018-03-01

    Experimental and theoretical results are presented concerning the rise of a spinning egg. It was found that an egg rises quickly while it is sliding and then more slowly when it starts rolling. The angular momentum of the egg projected in the XZ plane changed in the same direction as the friction torque, as expected, by rotating away from the vertical Z axis. The latter result does not explain the rise. However, an even larger effect arises from the Y component of the angular momentum vector. As the egg rises, the egg rotates about the Y axis, an effect that is closely analogous to rotation of the egg about the Z axis. Both effects can be described in terms of precession about the respective axes. Steady precession about the Z axis arises from the normal reaction force in the Z direction, while precession about the Y axis arises from the friction force in the Y direction. Precession about the Z axis ceases if the normal reaction force decreases to zero, and precession about the Y axis ceases if the friction force decreases to zero.
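
    The kinematic relation behind both precessions described above can be summarized as follows, where theta is the angle between the angular momentum L and the precession axis; this is the generic steady-precession result, not a reproduction of the paper's full analysis.

```latex
% Steady precession: the applied torque equals the rate of change of the
% angular momentum vector, which sweeps a cone about the precession axis.
\begin{equation}
  \boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}
                    = \boldsymbol{\Omega}\times\mathbf{L}
  \quad\Rightarrow\quad
  \Omega = \frac{\tau}{L\sin\theta}.
\end{equation}
```

    In this picture the torque of the normal reaction force drives the precession about Z and the torque of the friction force drives the precession about Y, and each precession stops when its driving force vanishes.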

  19. Sea level rise : A literature survey

    NARCIS (Netherlands)

    Oude Essink, G.H.P.

    1992-01-01

    In order to assess the impact of sea level rise on Water Management, it is useful to understand the mechanisms that determine the level of the sea. In this study, a literature survey is executed to analyze these mechanisms. Climate plays a central role in these mechanisms. Climate mainly changes

  20. Tube temperature rise limits: Boiling considerations

    Energy Technology Data Exchange (ETDEWEB)

    Vanderwater, R.G.

    1952-03-26

    A revision of tube power limits based on boiling considerations was presented earlier. The limits were given on a basis of tube power versus header pressure. However, for convenience of operation, the limits have been converted from tube power to permissible water temperature rise. The permissible Δt's for water are given in this document.

  1. The economic consequences of oil price rise

    International Nuclear Information System (INIS)

    Lescaroux, Francois

    2006-05-01

    The author discusses the possible consequences of a rise in the oil barrel price. First, he discusses the main results of the analyses that have been performed over the past thirty years regarding the impact of oil prices on economic activity. He proposes interpretations of these studies and their conclusions, and tries to draw lessons regarding the effects that can be expected from the recent evolution of energy markets

  2. The Enigma of Mercury's Northern Rise

    Science.gov (United States)

    James, P. B.

    2018-05-01

    Various aspects of the "northern rise" make it hard to explain: Its composition and chronology don't stand out from its surroundings, it seems to have uplifted late, and it has a huge gravity anomaly. We'll discuss the possible formation mechanisms.

  3. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    Directory of Open Access Journals (Sweden)

    Anisimov Vladimir

    2018-01-01

    Full Text Available In this paper, models and an algorithm are developed for forming the optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model is based on representing the optimization procedure as a non-linear discrete programming problem, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for the high-rise construction project.

  4. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    Science.gov (United States)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming the optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model is based on representing the optimization procedure as a non-linear discrete programming problem, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for the high-rise construction project.

  5. Sea-level rise: towards understanding local vulnerability

    Science.gov (United States)

    Rahmstorf, Stefan

    2012-06-01

    , experts are increasingly looking at its potential impacts on coasts to facilitate local adaptation planning. This is a more complex issue than one might think, because different stretches of coast can be affected in very different ways. First of all, the sea-level response to global warming will not be globally uniform, since factors like changes in ocean currents (Levermann et al 2005) and the changing gravitational pull of continental ice (Mitrovica et al 2001) affect the local rise. Secondly, superimposed on the climatic trend is natural variability in sea level, which regionally can be as large as the climatic signal on multi-decadal timescales. Over the past decades, sea level has dropped in sizable parts of the world ocean, although it has of course risen in global mean (IPCC 2007). Thirdly, local land uplift or subsidence affects the local sea-level change relative to the coast, both for natural reasons (post-glacial isostatic adjustment centred on regions that were covered by ice sheets during the last ice age) and artificial ones (e.g., extraction of water or oil as in the Gulf of Mexico). Finally, local vulnerability to sea-level rise depends on many factors. Two interesting new studies in this journal (Tebaldi et al 2012, Strauss et al 2012) make important steps towards understanding sea-level vulnerability along the coasts of the United States, with methods that could also be applied elsewhere. The first, by Strauss and colleagues, merges high-resolution topographic data and a newly available tidal model together with population and housing data in order to estimate what land area and population would be at risk given certain increments in sea level. The results are mapped and tabulated at county and city level. They reveal the 'hot spots' along the US coast where sea-level rise is of the highest concern because of large populations living near the high-tide line: New York City and Long Island; the New Jersey shore; the Norfolk, Virginia, area; near Charleston

  6. Improved USGS methodology for assessing continuous petroleum resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy A.

    2010-01-01

    This report presents an improved methodology for estimating volumes of continuous (unconventional) oil and gas resources within the United States and around the world. The methodology is based on previously developed U.S. Geological Survey methodologies that rely on well-scale production data. Improvements were made primarily to how the uncertainty about estimated ultimate recoveries is incorporated in the estimates. This is particularly important when assessing areas with sparse or no production data, because the new methodology allows better use of analog data from areas with significant discovery histories.

  7. Scenario development methodologies

    International Nuclear Information System (INIS)

    Eng, T.; Hudson, J.; Stephansson, O.

    1994-11-01

    In the period 1981-1994, SKB has studied several methodologies to systematize and visualize all the features, events and processes (FEPs) that can influence a repository for radioactive waste in the future. All the work performed is based on the terminology and basic findings in the joint SKI/SKB work on scenario development presented in the SKB Technical Report 89-35. The methodologies studied are a) Event tree analysis, b) Influence diagrams and c) Rock Engineering Systems (RES) matrices. Each one of the methodologies is explained in this report as well as examples of applications. One chapter is devoted to a comparison between the two most promising methodologies, namely: Influence diagrams and the RES methodology. In conclusion a combination of parts of the Influence diagram and the RES methodology is likely to be a promising approach. 26 refs

  8. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  9. Methodology for building confidence measures

    Science.gov (United States)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
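
    The abstract does not give the combination formulas, so the sketch below uses an independent-source (noisy-OR) combination plus importance weighting as a stand-in to make steps (i)-(iii) concrete; the function names and numbers are hypothetical, not the laboratory's actual formulation.

```python
from math import prod

def combined_element_confidence(source_reliabilities, reinforcement=0.0):
    """Confidence that one element is true, given several sources reporting it.

    Illustrative noisy-OR combination of independent sources; `reinforcement`
    adds support from related elements, with the result capped at 1.0.
    """
    p = 1.0 - prod(1.0 - r for r in source_reliabilities)
    return min(1.0, p + reinforcement * (1.0 - p))

def output_confidence(element_confidences, importances):
    """Importance-weighted confidence of the combined output."""
    total = sum(importances)
    return sum(c * w for c, w in zip(element_confidences, importances)) / total

# One element reported by three moderately reliable documents, and one seen
# by a single strong document reinforced by a related element.
c1 = combined_element_confidence([0.6, 0.7, 0.5])
c2 = combined_element_confidence([0.9], reinforcement=0.3)
print(round(output_confidence([c1, c2], importances=[2.0, 1.0]), 3))
```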

  10. Updating Maryland's sea-level rise projections

    Science.gov (United States)

    Boesch, Donald F.; Atkinson, Larry P.; Boicourt, William C.; Boon, John D.; Cahoon, Donald R.; Dalrymple, Robert A.; Ezer, Tal; Horton, Benjamin P.; Johnson, Zoe P.; Kopp, Robert E.; Li, Ming; Moss, Richard H.; Parris, Adam; Sommerfield, Christopher K.

    2013-01-01

    With its 3,100 miles of tidal shoreline and low-lying rural and urban lands, “The Free State” is one of the most vulnerable to sea-level rise. Historically, Marylanders have long had to contend with rising water levels along its Chesapeake Bay and Atlantic Ocean and coastal bay shores. Shorelines eroded and low-relief lands and islands, some previously inhabited, were inundated. Prior to the 20th century, this was largely due to the slow sinking of the land since Earth’s crust is still adjusting to the melting of large masses of ice following the last glacial period. Over the 20th century, however, the rate of rise of the average level of tidal waters with respect to land, or relative sea-level rise, has increased, at least partially as a result of global warming. Moreover, the scientific evidence is compelling that Earth’s climate will continue to warm and its oceans will rise even more rapidly. Recognizing the scientific consensus around global climate change, the contribution of human activities to it, and the vulnerability of Maryland’s people, property, public investments, and natural resources, Governor Martin O’Malley established the Maryland Commission on Climate Change on April 20, 2007. The Commission produced a Plan of Action that included a comprehensive climate change impact assessment, a greenhouse gas reduction strategy, and strategies for reducing Maryland’s vulnerability to climate change. The Plan has led to landmark legislation to reduce the state’s greenhouse gas emissions and a variety of state policies designed to reduce energy consumption and promote adaptation to climate change.

  11. USGS Methodology for Assessing Continuous Petroleum Resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy A.

    2011-01-01

    The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
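
    A minimal Monte Carlo sketch of the probabilistic structure described above is shown below; the input variables, distributions, and numbers are hypothetical placeholders rather than the USGS input form, which is considerably more detailed.

```python
import numpy as np

rng = np.random.default_rng(42)

def assess_continuous_resource(n_iter=100_000):
    """Monte Carlo sketch of a continuous (unconventional) gas assessment.

    Hypothetical inputs: productive area (acres), drainage area per well
    (acres/well), success ratio, and per-well estimated ultimate recovery
    (EUR, billion cubic feet) drawn from a lognormal distribution.
    """
    area    = rng.triangular(500_000, 800_000, 1_200_000, n_iter)      # acres
    cell    = rng.triangular(80, 120, 160, n_iter)                     # acres/well
    success = rng.triangular(0.5, 0.7, 0.9, n_iter)                    # fraction
    eur     = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n_iter)  # BCF/well

    total_gas = (area / cell) * success * eur   # BCF in each iteration
    return np.percentile(total_gas, [5, 50, 95])

low, median, high = assess_continuous_resource()
print(f"F95 ~ {low:,.0f} BCF, F50 ~ {median:,.0f} BCF, F5 ~ {high:,.0f} BCF")
```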

  12. Costs of disarmament - Rethinking the price tag: A methodological inquiry into the costs and benefits of arms control

    International Nuclear Information System (INIS)

    Willett, S.

    2002-06-01

    The growing number of arms control and disarmament treaties agreed on over the past decades as well as rising concerns about harmful environmental and public health effects of weapons disposal, have understandably led to an increase in the cost of implementing arms control agreements. As a result, the expenses associated with treaty compliance have emerged as a contentious issue within the realm of arms control and disarmament discussions. In particular, opponents of arms control and disarmament point to perceived rising costs of meeting current and proposed treaty obligations in an attempt to limit and undermine such activities. Yet determining just how much arms control and disarmament cost remains very much an ambiguous task. In Costs of Disarmament - Rethinking the Price Tag: A Methodological Inquiry into the Costs and Benefits of Arms Control, Susan Willett addresses the question of how the cost of arms control ought to be measured. Emphasizing the proper allocation of costs associated with arms control treaty implementation to the life cycle costs of weapon systems and their correct weighing against the benefits they procure in terms of averted arms races and increased international security, Willett argues for a revised methodology of costing arms control and disarmament that gives a more accurate - and significantly lower - estimate of the latter. Adopting such a revised methodology concludes the author, might dispel considerable misunderstanding and help point decisions over arms control and disarmament in the right direction

  13. Statistical significant change versus relevant or important change in (quasi) experimental design : some conceptual and methodological problems in estimating magnitude of intervention-related change in health services research

    NARCIS (Netherlands)

    Middel, Berrie; van Sonderen, Eric

    2002-01-01

    This paper aims to identify problems in estimating and interpreting the magnitude of intervention-related change over time, or responsiveness, assessed with health outcome measures. Responsiveness is a problematic construct and there is no consensus on how to quantify the appropriate index to

  14. Development of extreme rainfall PRA methodology for sodium-cooled fast reactor

    International Nuclear Information System (INIS)

    Nishino, Hiroyuki; Kurisaka, Kenichi; Yamano, Hidemasa

    2016-01-01

    The objective of this study is to develop a probabilistic risk assessment (PRA) methodology for extreme rainfall, focusing on the decay heat removal system of a sodium-cooled fast reactor. For extreme rainfall, the annual excess probability as a function of hazard intensity was statistically estimated based on meteorological data. To identify core damage sequences, event trees were developed assuming scenarios in which structures, systems and components (SSCs) important to safety are flooded by rainwater entering the buildings through gaps in the doors, and the SSCs fail when the level of rainwater on the ground or on the roof of the building rises above the thresholds of the doors on the first floor or on the roof during the rainfall. To estimate the failure probability of the SSCs, the level of water rise was estimated from the difference between precipitation and drainage capacity. By combining the annual excess probability and the failure probability of the SSCs, the event trees led to quantification of the core damage frequency, and thereby the PRA methodology for rainfall was developed. (author)
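
    The quantification step, combining the rainfall hazard curve with the conditional failure probability of the flooded SSCs, can be sketched as below; the event-tree detail is collapsed into a single fragility per hazard bin, and all numbers are illustrative rather than taken from the study.

```python
import numpy as np

def core_damage_frequency(annual_exceedance, p_failure_given_bin):
    """Combine a rainfall hazard curve with SSC fragilities (minimal sketch).

    annual_exceedance   : annual probability of exceeding each hazard level,
                          ordered from the lowest to the highest intensity
    p_failure_given_bin : conditional probability that flooding at that
                          intensity fails the SSCs (taken here as leading
                          directly to core damage)
    """
    exceed = np.asarray(annual_exceedance, dtype=float)
    # Annual frequency of each intensity bin = drop in the exceedance curve;
    # the last bin keeps the exceedance frequency of the highest level.
    bin_freq = np.append(-np.diff(exceed), exceed[-1])
    return float(np.sum(bin_freq * np.asarray(p_failure_given_bin, dtype=float)))

# Illustrative numbers only: four rainfall-intensity bins.
cdf = core_damage_frequency(
    annual_exceedance=[1e-1, 1e-2, 1e-3, 1e-4],
    p_failure_given_bin=[0.0, 0.01, 0.3, 0.9],
)
print(f"core damage frequency ~ {cdf:.1e} per year")
```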

  15. Photovoltaic module energy rating methodology development

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L. [National Renewable Energy Lab., Golden, CO (United States); Whitaker, C.; Newmiller, J. [Endecon Engineering, San Ramon, CA (United States)

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.
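
    A stripped-down version of the underlying calculation, integrating a simple module power model over an hourly weather profile, is sketched below. The consensus methodology defines its own reference conditions and rating procedure, so the power model, coefficients, and the tiny profile here are assumptions for illustration.

```python
import pandas as pd

def module_energy_rating(weather, p_stc=300.0, gamma=-0.004, noct=45.0):
    """Integrate module power over a weather profile (minimal sketch).

    weather : DataFrame with hourly 'ghi' (W/m^2) and 'temp_air' (deg C)
    p_stc   : nameplate power at Standard Test Conditions (W)
    gamma   : power temperature coefficient (1/deg C)
    noct    : nominal operating cell temperature (deg C)
    """
    # Simple NOCT cell-temperature model and linear irradiance/temperature power model.
    cell_temp = weather["temp_air"] + (noct - 20.0) * weather["ghi"] / 800.0
    power = p_stc * (weather["ghi"] / 1000.0) * (1.0 + gamma * (cell_temp - 25.0))
    return power.clip(lower=0.0).sum() / 1000.0   # kWh for hourly data

# Hypothetical two-hour profile just to show the call.
profile = pd.DataFrame({"ghi": [600.0, 950.0], "temp_air": [22.0, 30.0]})
print(f"energy over profile: {module_energy_rating(profile):.2f} kWh")
```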

  16. Demonstration of an infiltration evaluation methodology

    International Nuclear Information System (INIS)

    Smyth, J.D.; Gee, G.W.; Kincaid, C.T.; Nichols, W.M.; Bresler, E.

    1990-07-01

    An Infiltration Evaluation Methodology (IEM) was developed for the US Nuclear Regulatory Commission (NRC) by Pacific Northwest Laboratory (PNL) to provide a consistent, well formulated approach for evaluating drainage through engineered covers at low-level radioactive waste (LLW) sites. The methodology is designed to help evaluate the ability of proposed waste site covers to minimize drainage for LLW site license applications and for sites associated with the Uranium Mill Tailings Remedial Action (UMTRA) program. The objective of this methodology is to estimate the drainage through an engineered burial site cover system. The drainage estimate can be used as an input to a broader performance assessment methodology currently under development by the NRC. The methodology is designed to simulate, at the field scale, significant factors and hydrologic conditions which determine or influence estimates of infiltration, long-term moisture content profiles, and drainage from engineered covers and barriers. The IEM developed under this study acknowledges the uncertainty inherent in soil properties and quantifies the influence of such uncertainty on the estimates of drainage in engineered cover systems at waste disposal sites. 6 refs., 1 fig

  17. The Impact of Sea Level Rise on Florida's Everglades

    Science.gov (United States)

    Senarath, S. U.

    2005-12-01

    Global warming and the resulting melting of polar ice sheets could increase global sea levels significantly. Some studies have predicted mean sea level increases in the order of six inches to one foot in the next 25 to 50 years. This could have severe irreversible impacts on low-lying areas of Florida's Everglades. The key objective of this study is to evaluate the effects of a one foot sea level rise on Cape Sable Seaside Sparrow (CSSS) nesting areas within the Everglades National Park (ENP). A regional-scale hydrologic model is used to assess the sensitivities of this sea-level rise scenario. Florida's Everglades supports a unique ecosystem. At present, about 50 percent of this unique ecosystem has been lost due to urbanization and farming. Today, the water flow in the remnant Everglades is also regulated to meet a variety of competing environmental, water-supply and flood-control needs. A 30-year, eight billion dollar (1999 estimate) project has been initiated to improve Everglades' water flows. The expected benefits of this restoration project will be short-lived if the predicted sea level rise causes severe impacts on the environmentally sensitive areas of the Everglades. Florida's Everglades is home to many threatened and endangered species of wildlife. The Cape Sable Seaside Sparrow population in the ENP is one such species that is currently listed as endangered. Since these birds build their nests close to the ground surface (the base of the nest is approximately six inches from the ground surface), they are directly affected by any sea level induced ponding depth, frequency or duration change. Therefore, the CSSS population serves as a good indicator species for evaluating the negative impacts of sea level rise on the Everglades' ecosystem. The impact of sea level rise on the CSSS habitat is evaluated using the Regional Simulation Model (RSM) developed by the South Florida Water Management District. The RSM is an implicit, finite-volume, continuous

  18. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
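
    One common way to express a cell-level vertical uncertainty from the three error sources named above is to combine them in quadrature, assuming they are independent; whether the NCEI surfaces use exactly this combination is not stated in the abstract.

```latex
% Assumed quadrature combination of independent cell-level error sources.
\begin{equation}
  \sigma_{\mathrm{DEM}}(x,y) \;=\;
  \sqrt{\,\sigma_{\mathrm{source}}^{2}(x,y)
        + \sigma_{\mathrm{interp}}^{2}(x,y)
        + \sigma_{\mathrm{datum}}^{2}(x,y)\,}.
\end{equation}
```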

  19. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults.

    Directory of Open Access Journals (Sweden)

    Fang Chen

    Full Text Available Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al., Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the
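
    For reference, the variance-components (GREML) model that GCTA fits, and the SNP heritability it reports, take the standard form below, with K the genomic relationship matrix built from the genotyped SNPs.

```latex
% Linear mixed model fitted by GCTA: fixed effects X*beta, a polygenic random
% effect g with covariance K*sigma_g^2, and residual e; the SNP heritability
% is the genetic share of the total variance.
\begin{align}
  \mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{g} + \boldsymbol{\varepsilon},
  \qquad
  \mathbf{g} \sim \mathcal{N}\!\left(\mathbf{0},\,\mathbf{K}\sigma_g^{2}\right),
  \qquad
  \boldsymbol{\varepsilon} \sim \mathcal{N}\!\left(\mathbf{0},\,\mathbf{I}\sigma_e^{2}\right),\\
  h^{2}_{\mathrm{SNP}} &= \frac{\sigma_g^{2}}{\sigma_g^{2}+\sigma_e^{2}}.
\end{align}
```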

  20. Influence of plume rise on the consequences of radioactive material releases

    International Nuclear Information System (INIS)

    Russo, A.J.; Wayland, J.R.; Ritchie, L.T.

    1977-01-01

    Estimates of health consequences resulting from a postulated nuclear reactor accident can be strongly dependent on the buoyant rise of the plume of released radioactive material. The sensitivity of the consequences of a postulated accident to two different plume rise models has been investigated. The results of these investigations are the subject of this report. One of the models includes the effects of emission angle, momentum, and radioactive heating of the released material. The difference in the consequence estimates from the two models can exceed an order of magnitude under some conditions, but in general the results are similar

  1. Coastal sea level rise with warming above 2 °C.

    Science.gov (United States)

    Jevrejeva, Svetlana; Jackson, Luke P; Riva, Riccardo E M; Grinsted, Aslak; Moore, John C

    2016-11-22

    Two degrees of global warming above the preindustrial level is widely suggested as an appropriate threshold beyond which climate change risks become unacceptably high. This "2 °C" threshold is likely to be reached between 2040 and 2050 for both Representative Concentration Pathway (RCP) 8.5 and 4.5. Resulting sea level rises will not be globally uniform, due to ocean dynamical processes and changes in gravity associated with water mass redistribution. Here we provide probabilistic sea level rise projections for the global coastline with warming above the 2 °C goal. By 2040, with a 2 °C warming under the RCP8.5 scenario, more than 90% of coastal areas will experience sea level rise exceeding the global estimate of 0.2 m, with up to 0.4 m expected along the Atlantic coast of North America and Norway. With a 5 °C rise by 2100, sea level will rise rapidly, reaching 0.9 m (median), and 80% of the coastline will exceed the global sea level rise at the 95th percentile upper limit of 1.8 m. Under RCP8.5, by 2100, New York may expect rises of 1.09 m, Guangzhou may expect rises of 0.91 m, and Lagos may expect rises of 0.90 m, with the 95th percentile upper limit of 2.24 m, 1.93 m, and 1.92 m, respectively. The coastal communities of rapidly expanding cities in the developing world, and vulnerable tropical coastal ecosystems, will have a very limited time after midcentury to adapt to sea level rises unprecedented since the dawn of the Bronze Age.

  2. Introduction to LCA Methodology

    DEFF Research Database (Denmark)

    Hauschild, Michael Z.

    2018-01-01

    In order to offer the reader an overview of the LCA methodology in the preparation of the more detailed description of its different phases, a brief introduction is given to the methodological framework according to the ISO 14040 standard and the main elements of each of its phases. Emphasis...

  3. Methodologies, languages and tools

    International Nuclear Information System (INIS)

    Amako, Katsuya

    1994-01-01

    This is a summary of the "Methodologies, Languages and Tools" session at the CHEP'94 conference. All the contributions to methodologies and languages are relevant to the object-oriented approach. Other topics presented are related to various software tools in the down-sized computing environment

  4. Archetype modeling methodology.

    Science.gov (United States)

    Moner, David; Maldonado, José Alberto; Robles, Montserrat

    2018-03-01

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although many experiences of using archetypes have been reported in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the participants and tools involved. It also describes possible strategies for organizing the modeling process. The proposed methodology is inspired by existing best practices of CIMs, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can also serve as a reference for the development of CIMs using any other formalism. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Menopause and Methodological Doubt

    Science.gov (United States)

    Spence, Sheila

    2005-01-01

    Menopause and methodological doubt begins by making a tongue-in-cheek comparison between Descartes' methodological doubt and the self-doubt that can arise around menopause. A hermeneutic approach is taken in which Cartesian dualism and its implications for the way women are viewed in society are examined, both through the experiences of women…

  6. VEM: Virtual Enterprise Methodology

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a virtual enterprise methodology (VEM) that outlines activities to consider when setting up and managing virtual enterprises (VEs). As a methodology the VEM helps companies to ask the right questions when preparing for and setting up an enterprise network, which works...

  7. Data Centric Development Methodology

    Science.gov (United States)

    Khoury, Fadi E.

    2012-01-01

    Data-centric applications, an important area of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies can work for structural, procedural, or object-oriented applications, but fail to capture…

  8. The Methodology of Magpies

    Science.gov (United States)

    Carter, Susan

    2014-01-01

    Arts/Humanities researchers frequently do not explain methodology overtly; instead, they "perform" it through their use of language, textual and historic cross-reference, and theory. Here, methodologies from literary studies are shown to add to Higher Education (HE) an exegetical and critically pluralist approach. This includes…

  9. Reconciling projections of the Antarctic contribution to sea level rise

    Science.gov (United States)

    Edwards, Tamsin; Holden, Philip; Edwards, Neil; Wernecke, Andreas

    2017-04-01

    Two recent studies of the Antarctic contribution to sea level rise this century had best estimates that differed by an order of magnitude (around 10 cm and 1 m by 2100). The first, Ritz et al. (2015), used a model calibrated with satellite data, giving a 5% probability of exceeding 30cm by 2100 for sea level rise due to Antarctic instability. The second, DeConto and Pollard (2016), used a model evaluated with reconstructions of palaeo-sea level. They did not estimate probabilities, but using a simple assumption here about the distribution shape gives up to a 5% chance of Antarctic contribution exceeding 2.3 m this century with total sea level rise approaching 3 m. If robust, this would have very substantial implications for global adaptation to climate change. How are we to make sense of this apparent inconsistency? How much is down to the data - does the past tell us we will face widespread and rapid Antarctic ice losses in the future? How much is due to the mechanism of rapid ice loss ('cliff failure') proposed in the latter paper, or other parameterisation choices in these low resolution models (GRISLI and PISM, respectively)? How much is due to choices made in the ensemble design and calibration? How do these projections compare with high resolution, grounding line resolving models such as BISICLES? Could we reduce the huge uncertainties in the palaeo-study? Emulation provides a powerful tool for understanding these questions and reconciling the projections. By describing the three numerical ice sheet models with statistical models, we can re-analyse the ensembles and re-do the calibrations under a common statistical framework. This reduces uncertainty in the PISM study because it allows massive sampling of the parameter space, which reduces the sensitivity to reconstructed palaeo-sea level values and also narrows the probability intervals because the simple assumption about distribution shape above is no longer needed. We present reconciled probabilistic
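
    The emulator-based reconciliation described above can be sketched in miniature: fit a Gaussian process to a small (here entirely synthetic) ensemble of ice-sheet model runs as a function of one uncertain parameter, then sample the emulator densely to obtain probabilistic projections. The toy "simulator", the parameter name, and the sample sizes are assumptions for illustration only; this is not the GRISLI, PISM or BISICLES setup.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Toy emulation/calibration sketch: replace an expensive ice-sheet model
        # with a statistical surrogate, then sample the surrogate densely over
        # the input space. The "simulator" below is a made-up stand-in.
        rng = np.random.default_rng(1)

        def toy_simulator(basal_param):
            """Pretend ice-sheet model: sea level contribution (m) vs. one parameter."""
            return 0.1 + 0.9 / (1.0 + np.exp(-4.0 * (basal_param - 0.5)))

        # A small, expensive-to-run ensemble (say 15 members).
        x_train = rng.uniform(0.0, 1.0, 15).reshape(-1, 1)
        y_train = toy_simulator(x_train).ravel() + rng.normal(0, 0.02, 15)

        gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-3),
                                      normalize_y=True).fit(x_train, y_train)

        # Massive re-sampling of the parameter space via the emulator.
        x_dense = rng.uniform(0.0, 1.0, 100_000).reshape(-1, 1)
        mean, std = gp.predict(x_dense, return_std=True)
        samples = rng.normal(mean, std)
        print("median %.2f m, 5-95%% range %.2f-%.2f m" %
              tuple(np.percentile(samples, [50, 5, 95])))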

  10. Rise of oil prices and energy policy

    International Nuclear Information System (INIS)

    2005-01-01

    This document reprints the press conference talk given by D. de Villepin, French prime minister, on August 16, 2005, about the alarming rise of oil prices. In his talk, the prime minister explains the reasons for the crisis (increasing worldwide consumption, political tensions in the Middle East, etc.) and presents the strategy and main trends of the French energy policy: re-launching of energy investments in petroleum refining capacities and in the nuclear domain (new generation of power plants), development of renewable energy sources and in particular biofuels, and re-launching of the energy saving policy through financial incentives and the development of clean vehicles and mass transportation systems. In a second part, the prime minister presents his policy of returning part of the petroleum tax revenues to low-income workers, and of reducing charges for professionals whose occupations are strongly penalized by the rise of oil prices (truckers, farmers, fishermen, taxi drivers). (J.S.)

  11. High and rising health care costs.

    Science.gov (United States)

    Ginsburg, Paul B

    2008-10-01

    The U.S. is spending a growing share of the GDP on health care, outpacing other industrialized countries. This synthesis examines why costs are higher in the U.S. and what is driving their growth. Key findings include: health care inefficiency, medical technology and health status (particularly obesity) are the primary drivers of rising U.S. health care costs. Health payer systems that reward inefficiencies and preempt competition have impeded productivity gains in the health care sector. The best evidence indicates medical technology accounts for one-half to two-thirds of spending growth. While medical malpractice insurance and defensive medicine contribute to health costs, they are not large enough factors to significantly contribute to a rise in spending. Research is consistent that demographics will not be a significant factor in driving spending despite the aging baby boomers.

  12. Compton suppression through rise-time analysis

    International Nuclear Information System (INIS)

    Selvi, S.; Celiktas, C.

    2007-01-01

    We studied Compton suppression for 60Co and 137Cs radioisotopes using a signal selection criterion based on contrasting the fall time of the signals composing the photopeak with those composing the Compton continuum. The fall time criterion is applied through pulse shape analysis, observing the change in the fall times of the gamma-ray pulses. This change is determined by measuring the changes in the rise times related to the fall time of the scintillator and in the timing signals related to the fall time of the input signals. We showed that Compton continuum suppression is best achieved via precise timing adjustment of an analog rise-time analyzer connected to a NaI(Tl) scintillation spectrometer
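
    A rough software analogue of the analog rise-time analysis described above is sketched below: digitized pulses are assigned a 10-90% rise time and only events inside an accepted window are kept. The pulse shapes, sampling step and window limits are invented for illustration and are not the spectrometer settings from the paper.

        import numpy as np

        # Software illustration of rise-time discrimination on simulated pulses;
        # all shapes, the sampling step and the accepted window are placeholders.
        def rise_time(pulse, dt_ns):
            """10-90% rise time of a single positive pulse, in nanoseconds."""
            peak = pulse.max()
            t10 = np.argmax(pulse >= 0.1 * peak)
            t90 = np.argmax(pulse >= 0.9 * peak)
            return (t90 - t10) * dt_ns

        rng = np.random.default_rng(2)
        dt_ns, n_samples = 4.0, 256
        t = np.arange(n_samples) * dt_ns

        def make_pulse(tau_rise_ns, tau_fall_ns):
            shape = (1 - np.exp(-t / tau_rise_ns)) * np.exp(-t / tau_fall_ns)
            return shape / shape.max() + rng.normal(0, 0.01, n_samples)

        pulses = ([make_pulse(rng.uniform(20, 40), 230) for _ in range(1000)] +
                  [make_pulse(rng.uniform(60, 120), 230) for _ in range(1000)])

        accept_lo, accept_hi = 30.0, 90.0   # ns, hypothetical accepted window
        kept = [p for p in pulses if accept_lo <= rise_time(p, dt_ns) <= accept_hi]
        print(f"kept {len(kept)} of {len(pulses)} pulses")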

  13. Automated remedial assessment methodology software system

    International Nuclear Information System (INIS)

    Whiting, M.; Wilkins, M.; Stiles, D.

    1994-11-01

    The Automated Remedial Analysis Methodology (ARAM) software system has been developed by the Pacific Northwest Laboratory to assist the U.S. Department of Energy (DOE) in evaluating cleanup options for over 10,000 contaminated sites across the DOE complex. The automated methodology comprises modules for decision logic diagrams, technology applicability and effectiveness rules, mass balance equations, cost and labor estimating factors and equations, and contaminant stream routing. ARAM is used to select technologies for meeting cleanup targets; determine the effectiveness of the technologies in destroying, removing, or immobilizing contaminants; decide the nature and amount of secondary waste requiring further treatment; and estimate the cost and labor involved when applying technologies
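
    The structure of such a screening tool can be sketched very roughly: applicability rules select candidate technologies for a contaminated stream, a removal efficiency drives a simple mass balance on the contaminant, and unit-cost factors give an order-of-magnitude cost. The technology names, efficiencies and cost factors below are invented placeholders, not ARAM's data or logic.

        # Very rough sketch of rule-based remediation screening:
        # applicability rules -> mass balance on the contaminant -> cost estimate.
        TECHNOLOGIES = [
            # name, applicable contaminant classes, removal fraction, $ per m^3
            ("soil washing",       {"metals"},             0.80, 120.0),
            ("thermal desorption", {"organics"},           0.95, 260.0),
            ("solidification",     {"metals", "organics"}, 0.90, 180.0),
        ]

        def screen(contaminant_class, volume_m3, mass_kg, target_fraction_remaining):
            options = []
            for name, classes, removal, unit_cost in TECHNOLOGIES:
                if contaminant_class not in classes:
                    continue                              # applicability rule
                residual = mass_kg * (1.0 - removal)      # simple mass balance
                meets_target = residual <= target_fraction_remaining * mass_kg
                options.append((name, residual, meets_target, unit_cost * volume_m3))
            return options

        for name, residual, ok, cost in screen("metals", 5000.0, 40.0, 0.15):
            print(f"{name:18s} residual {residual:5.1f} kg "
                  f"{'meets' if ok else 'misses'} target, cost ~ ${cost:,.0f}")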

  14. The rise of precarious employment in Germany

    OpenAIRE

    Brady, David; Biegert, Thomas

    2017-01-01

    Long considered the classic coordinated market economy featuring employment security and relatively little employment precarity, the German labor market has undergone profound changes in recent decades. We assess the evidence for a rise in precarious employment in Germany from 1984 to 2013. Using data from the German Socio-Economic Panel (SOEP) through the Luxembourg Income Study, we examine low-wage employment, working poverty, and temporary employment. We also analyze changes in the demogra...

  15. Rising sea levels and small island states

    International Nuclear Information System (INIS)

    Leatherman, S.P.

    1994-01-01

    A review is given of the problems small island nations face with respect to sea level rise caused by global warming. Many small island nations are very vulnerable to sea level rise. Particularly at risk are coral reef atolls, which are generally quite small, lie within three metres of current sea levels, and have no land at higher elevations to relocate populations and economic activity. Volcanic islands in the Pacific have high ground, but it is largely rugged, high relief and soil-poor. The most vulnerable islands are those that consist entirely of atolls and reef islands, such as Kiribati, Maldives, Tokelau and Tuvalu. Small island states, which by themselves have little power or influence in world affairs, have banded together to form the Alliance of Small Island States (AOSIS). This alliance had grown to include 42 states by the time of the 1992 U.N. Earth Summit. Although the greenhouse effect is mainly caused by industrial nations, developing countries will suffer the most from it. Choices of response strategy will depend on environmental, economic and social factors. Most small island nations do not have the resources to fight sea level rise in the way that the Dutch have. Retreat can occur as a gradual process or as catastrophic abandonment. Prohibiting construction close to the water's edge is a good approach. Sea level histories for each island state should be compiled and updated, island geomorphology and settlement patterns should be surveyed to determine risk areas, storm regimes should be determined, and information on coastal impacts of sea level rise should be disseminated to the public

  16. Rugged calorimeter with a fast rise time

    International Nuclear Information System (INIS)

    McMurtry, W.M.; Dolce, S.R.

    1980-01-01

    An intrinsic 1-mil-thick gold foil calorimeter has been developed which rises to 95% of the energy deposited in less than 2 microseconds. This calorimeter is very rugged, and can withstand rough handling without damage. The time constant is long, in the millisecond range, because of its unique construction. Use of this calorimeter has produced 100% data recovery, and agreement with true deposition to less than 10%

  17. The methodology proposed to estimate the absorbed dose at the entrance of the labyrinth in HDR brachytherapy facilities with Ir-192

    Energy Technology Data Exchange (ETDEWEB)

    Pujades, M. C.; Perez-Calatayud, J.; Ballester, F.

    2012-07-01

    In the absence of an established procedure for evaluating the design of a brachytherapy (BT) vault with a maze from the radiation protection point of view, the formalism for external beam radiation is usually adapted. The purpose of this study is to adapt the methodology described in the National Council on Radiation Protection and Measurements Report 151 (NCRP 151), Structural Shielding Design for Megavoltage X- and Gamma-Ray Radiotherapy Facilities, for estimating the dose at the door in BT, and to compare it with the results obtained by the Monte Carlo (MC) method for a special case of bunker. (Author) 17 refs.
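
    The flavour of such an adaptation can be conveyed with a single-scatter estimate of the dose at the maze door: air kerma at the inner maze wall, multiplied by a wall reflection coefficient and scattering area, then spread over the squared wall-to-door distance. Every numerical value below (source strength, workload, reflection coefficient, geometry) is a placeholder assumption; the paper itself should be consulted for the actual formalism and results.

        # Single-scatter sketch of the weekly dose at a brachytherapy maze door
        # for an HDR Ir-192 bunker. All values are placeholder assumptions.
        AIR_KERMA_RATE_CONST = 0.111   # uGy*m^2 / (MBq*h), approx. for Ir-192

        source_activity_mbq = 370000.0   # ~10 Ci HDR source (assumed)
        treat_hours_per_week = 2.0       # source-out time per week (assumed workload)
        d_source_to_wall_m = 4.0         # source to inner maze wall (assumed)
        d_wall_to_door_m = 6.0           # inner maze wall to door (assumed)
        reflection_coeff = 0.02          # wall reflection coefficient (assumed)
        scatter_area_m2 = 8.0            # wall area seen from the door (assumed)

        # Kerma at the scattering wall, then one reflection toward the door.
        kerma_at_wall = (AIR_KERMA_RATE_CONST * source_activity_mbq
                         * treat_hours_per_week / d_source_to_wall_m ** 2)  # uGy/week
        dose_at_door = (kerma_at_wall * reflection_coeff * scatter_area_m2
                        / d_wall_to_door_m ** 2)

        print(f"~{dose_at_door:.1f} uGy/week at the maze door (single scatter only)")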

  18. Screening radon risks: A methodology for policymakers

    International Nuclear Information System (INIS)

    Eisinger, D.S.; Simmons, R.A.; Lammering, M.; Sotiros, R.

    1991-01-01

    This paper provides an easy-to-use screening methodology to estimate potential excess lifetime lung cancer risk resulting from indoor radon exposure. The methodology was developed under U.S. EPA Office of Policy, Planning, and Evaluation sponsorship of the agency's Integrated Environmental Management Projects (IEMP) and State/Regional Comparative Risk Projects. These projects help policymakers understand and use scientific data to develop environmental problem-solving strategies. This research presents the risk assessment methodology, discusses its basis, and identifies appropriate applications. The paper also identifies assumptions built into the methodology and qualitatively addresses methodological uncertainties, the direction in which these uncertainties could bias analyses, and their relative importance. The methodology draws from several sources, including risk assessment formulations developed by the U.S. EPA's Office of Radiation Programs, the EPA's Integrated Environmental Management Project (Denver), the International Commission on Radiological Protection, and the National Institute for Occupational Safety and Health. When constructed as a spreadsheet program, the methodology easily facilitates analyses and sensitivity studies (the paper includes several sensitivity study options). The methodology will be most helpful to those who need to make decisions concerning radon testing, public education, and exposure prevention and mitigation programs. 26 references
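
    A minimal spreadsheet-style version of such a screening calculation converts an indoor radon concentration to cumulative exposure in working level months (WLM) and multiplies by a lifetime risk coefficient. The equilibrium factor, occupancy fraction and risk coefficient below are placeholder assumptions for illustration, not the EPA values used by the methodology.

        # Spreadsheet-style radon screening sketch: concentration -> WLM ->
        # excess lifetime lung cancer risk. All coefficients are placeholders.
        radon_pci_per_l = 4.0        # indoor radon concentration (EPA action level)
        equilibrium_factor = 0.5     # radon progeny equilibrium factor (assumed)
        occupancy_fraction = 0.7     # fraction of time spent indoors at home (assumed)
        exposure_years = 70.0        # lifetime exposure duration (assumed)
        risk_per_wlm = 3.5e-4        # excess lung cancer deaths per WLM (assumed)

        # 1 working level (WL) corresponds to 100 pCi/L of radon at full equilibrium.
        working_level = radon_pci_per_l * equilibrium_factor / 100.0

        # 1 WLM = exposure to 1 WL for 170 hours.
        hours = exposure_years * 365.25 * 24.0 * occupancy_fraction
        wlm = working_level * hours / 170.0

        excess_risk = wlm * risk_per_wlm
        print(f"cumulative exposure: {wlm:.1f} WLM, "
              f"screening-level excess lifetime risk ~ {excess_risk:.3f}")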

  19. High-rise construction in the Russian economy: modeling of management decisions

    Science.gov (United States)

    Miroshnikova, Tatyana; Taskaeva, Natalia

    2018-03-01

    The growth in the building industry, particularly in residential high-rise construction, is having considerable influence on the country's economic development. The scientific hypothesis of the research is that the execution of town-planning programs of high-rise construction depends to