WorldWideScience

Sample records for model hazard ratio

  1. Flash Flood Hazard Susceptibility Mapping Using Frequency Ratio and Statistical Index Methods in Coalmine Subsidence Areas

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2016-09-01

    Full Text Available This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software. Based on these locations, a flood hazard inventory map was built. Seventy percent (n = 60) of the flood hazard locations were randomly selected for building the models, and the remaining 30% (n = 25) were used for validation. Because the XQG used to be a coal mining area, coalmine caves, mining-induced subsidence, and many ground fissures exist in this catchment. This study therefore took the subsidence risk level into consideration for the FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values of each conditioning parameter and checked their impact on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using the frequency ratio (FR)-natural breaks, statistical index (SI)-natural breaks, and FR-manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. FR modeling with a natural breaks classification method was therefore found to be the most appropriate for generating the FFHSM for the Xiqu Gully.
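
    The frequency ratio statistic at the core of this kind of susceptibility mapping is simple: for each class of a conditioning factor, FR is the share of hazard points falling in that class divided by the share of the study area occupied by that class, and a cell's susceptibility index is the sum of the FR values of its classes over all factors. The short sketch below illustrates the calculation for a single hypothetical factor; the variable names, class scheme and data are assumptions for illustration, not the study's.

      # Minimal frequency-ratio (FR) sketch for one conditioning factor (hypothetical data).
      import numpy as np
      import pandas as pd

      def frequency_ratio(class_labels, hazard_mask):
          """Return the FR value for every class of one conditioning factor.

          class_labels : array assigning each raster cell to a class
          hazard_mask  : boolean array, True where a flash-flood point was mapped
          """
          df = pd.DataFrame({"cls": class_labels, "haz": hazard_mask})
          pct_hazard = df.groupby("cls")["haz"].sum() / df["haz"].sum()  # share of hazard points per class
          pct_area = df.groupby("cls")["haz"].count() / len(df)          # share of study area per class
          return (pct_hazard / pct_area).to_dict()

      rng = np.random.default_rng(0)
      slope_cls = rng.integers(0, 5, size=1000)      # hypothetical slope classes for 1000 cells
      hazard = rng.random(1000) < 0.06               # hypothetical hazard inventory
      fr_slope = frequency_ratio(slope_cls, hazard)

      # Susceptibility index of a cell = sum of the FR values of the classes it falls into,
      # accumulated over all ten conditioning factors (only the slope factor is shown here).
      ffhsm_index = np.array([fr_slope[c] for c in slope_cls])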

  2. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of the hazard distribution models. The fundamentals of hazard analysis are discussed in terms of failure criteria, and we illustrate the flexibility of the hazard modeling distribution, which can approach different distributions.

  3. Hazard Warning: model misuse ahead

    DEFF Research Database (Denmark)

    Dickey-Collas, M.; Payne, Mark; Trenkel, V.

    2014-01-01

    The use of modelling approaches in marine science, and in particular fisheries science, is explored. We highlight that the choice of model used for an analysis should account for the question being posed or the context of the management problem. We examine a model-classification scheme based...

  4. Geospatial subsidence hazard modelling at Sterkfontein Caves ...

    African Journals Online (AJOL)

    The geo-hazard subsidence model includes historic subsidence occurrences, terrain (water flow) and water accumulation. Water accumulating on the surface will percolate and reduce the strength of the soil mass, possibly inducing subsidence. Areas for further geotechnical investigation are identified, demonstrating that a ...

  5. Hazard identification based on plant functional modelling

    International Nuclear Information System (INIS)

    Rasmussen, B.; Whetton, C.

    1993-10-01

    A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)
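
    The Intent-Methods-Constraints decomposition described above maps naturally onto a recursive object structure. The following minimal sketch (illustrative class and attribute names, not the authors' implementation) shows how a plant function can be decomposed into the hierarchical, object-oriented structure that a Concept Hazard Analysis worksheet could then walk.

      # Sketch of functional decomposition into Intents with Methods and Constraints.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Intent:
          goal: str
          methods: List["Intent"] = field(default_factory=list)      # ways the Intent is realised
          constraints: List["Intent"] = field(default_factory=list)  # limits placed on the Intent

          def decompose(self, level=0):
              """Walk the functional hierarchy, yielding (depth, goal) pairs."""
              yield level, self.goal
              for child in self.methods + self.constraints:
                  yield from child.decompose(level + 1)

      # Hypothetical example of a plant-level function and its decomposition.
      plant = Intent("Maintain reactor coolant inventory",
                     methods=[Intent("Operate make-up pumps",
                                     constraints=[Intent("Pump capacity limits flow")])],
                     constraints=[Intent("Preserve pressure boundary integrity")])

      for depth, goal in plant.decompose():
          print("  " * depth + goal)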

  6. The New Zealand probabilistic tsunami hazard model

    Science.gov (United States)

    Power, W. L.; Mueller, C.; Barberopoulou, A.; Wallace, L. M.; Wang, X.; Fraser, S. A.

    2012-12-01

    Effective mitigation of the risks posed by tsunami is an urgent priority for New Zealand, a country straddling the Pacific 'Ring of Fire' and its associated subduction zones. Methods of mitigation, which are in various stages of development, include evacuation mapping, land use planning, and engineering of tsunami-resilient buildings and infrastructure. But for this mitigation to be effective, an accurate estimate of the hazard posed by tsunamis is needed. This is the motivation behind the New Zealand probabilistic tsunami hazard model. The model considers all types of seismic tsunami sources, whether local, regional or distant to New Zealand. The potential for including other source types, such as landslide and volcanic sources, will be briefly discussed. A critical issue when defining tsunami sources for New Zealand is that the magnitude-frequency distributions of many key seismic sources are not accurately known. For the subduction interfaces and other offshore faults close to New Zealand the historical record of tsunamis is too short to derive magnitude-frequency distributions empirically, while the paleotsunami record is incomplete. Fortunately some of the parameters that determine and constrain the magnitude-frequency distributions can be estimated, albeit with uncertainty. We present a Monte Carlo method in which those controlling parameters are randomly sampled, which leads to a process for sampling from the range of different possible magnitude-frequency distributions. Our Monte Carlo method requires the generation of many synthetic catalogues, which in turn require rapid methods for estimating tsunami heights in each scenario: the methods used for this purpose will be presented. The outputs from our probabilistic model can be presented as hazard curves, describing tsunami height as a function of return period for each section of the coast; these hazard curves include 'error bars' as determined by the uncertainties incorporated in our Monte Carlo model. Most

  7. Model building in nonproportional hazard regression.

    Science.gov (United States)

    Rodríguez-Girondo, Mar; Kneib, Thomas; Cadarso-Suárez, Carmen; Abu-Assi, Emad

    2013-12-30

    Recent developments of statistical methods allow for very flexible modeling of covariates affecting survival times via the hazard rate, including the inspection of possible time-dependent associations. Despite their immediate appeal in terms of flexibility, these models typically introduce additional difficulties when a subset of covariates and the corresponding modeling alternatives have to be chosen, that is, when building the most suitable model for given data. This is particularly true when potentially time-varying associations are considered. We propose to use a piecewise exponential representation of the original survival data to link hazard regression with estimation schemes based on the Poisson likelihood, making recent advances for model building in exponential family regression accessible also in the nonproportional hazard regression context. A two-stage stepwise selection approach, an approach based on doubly penalized likelihood, and a componentwise functional gradient descent approach are adapted to the piecewise exponential regression problem. These three techniques were compared via an intensive simulation study. An application to prognosis after discharge for patients who suffered a myocardial infarction supplements the simulation to demonstrate the pros and cons of the approaches in real data analyses. Copyright © 2013 John Wiley & Sons, Ltd.
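
    The piecewise exponential device mentioned above splits each subject's follow-up at a set of cut points, records the exposure time and event indicator in each interval, and fits a Poisson model with the log exposure time as an offset, so the covariate coefficients are log hazard ratios. A minimal sketch under assumed variable names and cut points (not the authors' data or code):

      # Piecewise exponential representation fitted as a Poisson GLM (illustrative only).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def to_piecewise(df, cuts):
          """Expand (time, event, covariate) rows into one row per subject-interval.
          The cut points should span the whole follow-up period."""
          rows = []
          for _, r in df.iterrows():
              start = 0.0
              for j, end in enumerate(cuts):
                  if r["time"] <= start:
                      break
                  exposure = min(r["time"], end) - start
                  event = int(bool(r["event"]) and r["time"] <= end)
                  rows.append({"interval": j, "exposure": exposure, "event": event, "x": r["x"]})
                  start = end
          return pd.DataFrame(rows)

      rng = np.random.default_rng(1)
      data = pd.DataFrame({"time": rng.exponential(2.0, 200),
                           "event": rng.random(200) < 0.8,
                           "x": rng.normal(size=200)})
      pw = to_piecewise(data, cuts=[1, 2, 4, 8, 100])

      X = pd.get_dummies(pw["interval"], prefix="int").astype(float)  # interval-specific baselines
      X["x"] = pw["x"]
      fit = sm.GLM(pw["event"], X, family=sm.families.Poisson(),
                   offset=np.log(pw["exposure"])).fit()
      print(fit.params["x"])  # log hazard ratio for x under the piecewise exponential model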

  8. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  9. The New Italian Seismic Hazard Model

    Science.gov (United States)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly intended to support the update of the seismic building code. The CPS designed a roadmap for releasing, within three years, a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the earthquake engineering experts who will then participate in the revision of the building code. The activities were organized in 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has selected the most up-to-date information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources and gridded seismicity based on different approaches. The GMPE task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures to test, with the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme

  10. The 2013 European Seismic Hazard Model: key components and results

    OpenAIRE

    Jochen Woessner; Danciu Laurentiu; Domenico Giardini; Helen Crowley; Fabrice Cotton; G. Grünthal; Gianluca Valensise; Ronald Arvidsson; Roberto Basili; Mine Betül Demircioglu; Stefan Hiemer; Carlo Meletti; Roger W. Musson; Andrea N. Rovida; Karin Sesetyan

    2015-01-01

    The 2013 European Seismic Hazard Model (ESHM13) results from a community-based probabilistic seismic hazard assessment supported by the EU-FP7 project “Seismic Hazard Harmonization in Europe” (SHARE, 2009–2013). The ESHM13 is a consistent seismic hazard model for Europe and Turkey which overcomes the limitation of national borders and includes a thorough quantification of the uncertainties. It is the first completed regional effort contributing to the “Global Earthquake Model” initiative. It m...

  11. Comparison of Fuzzy-Based Models in Landslide Hazard Mapping

    Science.gov (United States)

    Mijani, N.; Neysani Samani, N.

    2017-09-01

    Landslides are among the main geomorphic processes that affect development prospects in mountainous areas and cause disastrous accidents. Landslide hazard depends on several uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river, and distance from the road network. This research aims to compare and evaluate different fuzzy-based models, including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma, and Fuzzy-OR. The main contribution of this paper is a comprehensive treatment of the criteria causing landslide hazard, considering their uncertainties, together with a comparison of the different fuzzy-based models. The evaluation is quantified using the Density Ratio (DR) and Quality Sum (QS) measures. The proposed methodology was implemented in Sari, a city in Iran that has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment demonstrated that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma, and Fuzzy-OR respectively cover 13, 26, and 35 percent of the study area with a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for landslide zonation in the city of Sari, with the Fuzzy Gamma method a close second.
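
    The two non-AHP overlay operators compared above are standard fuzzy combination rules: Fuzzy-OR takes the maximum membership across criteria, while Fuzzy Gamma blends the fuzzy algebraic product and fuzzy algebraic sum through an exponent gamma. A small sketch with hypothetical membership layers (values in [0, 1]), not the study's data:

      # Fuzzy-OR and Fuzzy-Gamma overlay of per-criterion membership layers (illustrative).
      import numpy as np

      def fuzzy_or(memberships):
          """Fuzzy OR: the maximum membership across criteria for each cell."""
          return np.max(memberships, axis=0)

      def fuzzy_gamma(memberships, gamma=0.9):
          """Fuzzy gamma: compromise between algebraic product and algebraic sum."""
          prod = np.prod(memberships, axis=0)               # fuzzy algebraic product
          asum = 1.0 - np.prod(1.0 - memberships, axis=0)   # fuzzy algebraic sum
          return (asum ** gamma) * (prod ** (1.0 - gamma))

      layers = np.array([[0.2, 0.8, 0.5],   # e.g. slope membership for three cells
                         [0.6, 0.9, 0.1],   # e.g. precipitation membership
                         [0.3, 0.7, 0.4]])  # e.g. distance-to-river membership

      print(fuzzy_or(layers))     # hazard index driven by the strongest single criterion
      print(fuzzy_gamma(layers))  # blended landslide-hazard index per cell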

  12. Bibliography - Existing Guidance for External Hazard Modelling

    International Nuclear Information System (INIS)

    Decker, Kurt

    2015-01-01

    The bibliography of deliverable D21.1 includes existing international and national guidance documents and standards on external hazard assessment, together with a selection of recent scientific papers which are regarded as providing useful information on the state of the art of external event modelling. The literature database is subdivided into International Standards, National Standards, and Science Papers. The deliverable is treated as a 'living document' which is regularly updated as necessary during the lifetime of ASAMPSA-E. The current content of the database is about 140 papers. Most of the articles are available as full-text versions in PDF format. The deliverable is available as an EndNote X4 database and as text files. The database includes the following information: Reference, Key words, Abstract (if available), PDF file of the original paper (if available), and Notes (comments by the ASAMPSA-E consortium, if available). The database is stored at the ASAMPSA-E FTP server hosted by IRSN. PDF files of original papers are accessible through the EndNote software

  13. Correcting hazard ratio estimates for outcome misclassification using multiple imputation with internal validation data.

    Science.gov (United States)

    Ni, Jiayi; Leong, Aaron; Dasgupta, Kaberi; Rahme, Elham

    2017-08-01

    Outcome misclassification may occur in observational studies using administrative databases. We evaluated a two-step multiple imputation approach based on complementary internal validation data obtained from two subsamples of study participants to reduce bias in hazard ratio (HR) estimates in Cox regressions. We illustrated this approach using data from a surveyed sample of 6247 individuals in a study of statin-diabetes association in Quebec. We corrected diabetes status and onset assessed from health administrative data against self-reported diabetes and/or elevated fasting blood glucose (FBG) assessed in subsamples. The association between statin use and new onset diabetes was evaluated using administrative data and the corrected data. By simulation, we assessed the performance of this method varying the true HR, sensitivity, specificity, and the size of validation subsamples. The adjusted HR of new onset diabetes among statin users versus non-users was 1.61 (95% confidence interval: 1.09-2.38) using administrative data only, 1.49 (0.95-2.34) when diabetes status and onset were corrected based on self-report and undiagnosed diabetes (FBG ≥ 7 mmol/L), and 1.36 (0.92-2.01) when corrected for self-report and undiagnosed diabetes/impaired FBG (≥ 6 mmol/L). In simulations, the multiple imputation approach yielded less biased HR estimates and appropriate coverage for both non-differential and differential misclassification. Large variations in the corrected HR estimates were observed using validation subsamples with low participation proportion. The bias correction was sometimes outweighed by the uncertainty introduced by the unknown time of event occurrence. Multiple imputation is useful to correct for outcome misclassification in time-to-event analyses if complementary validation data are available from subsamples. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions differed across phases. Given the best hazard-based model for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
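
    In practice, candidate accelerated failure time distributions such as those listed above are fitted to the same duration data and compared with an information criterion. The sketch below does this with the lifelines package on synthetic data; the column names and covariates are assumptions, and the generalized gamma and gamma-heterogeneity variants examined in the paper are omitted.

      # Compare AFT distributions for incident duration by AIC (illustrative, synthetic data).
      import numpy as np
      import pandas as pd
      from lifelines import WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter

      rng = np.random.default_rng(2)
      df = pd.DataFrame({
          "duration": rng.weibull(1.5, 500) * 40 + 1,   # incident duration in minutes
          "observed": 1,                                # 1 = duration fully observed
          "lanes_blocked": rng.integers(0, 3, 500),
          "injury": rng.integers(0, 2, 500),
      })

      fits = {}
      for name, Fitter in [("Weibull", WeibullAFTFitter),
                           ("Log-normal", LogNormalAFTFitter),
                           ("Log-logistic", LogLogisticAFTFitter)]:
          model = Fitter().fit(df, duration_col="duration", event_col="observed")
          fits[name] = model.AIC_

      best = min(fits, key=fits.get)
      print(fits, "->", best)  # pick the lowest-AIC distribution for each incident phase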

  15. Mathematical modeling of potentially hazardous nuclear objects with time shifts

    International Nuclear Information System (INIS)

    Gharakhanlou, J.; Kazachkov, I.V.

    2012-01-01

    Aggregate models for potentially hazardous objects with time shifts are used for mathematical modeling and computer simulation. The effects of time delays and time forecasts are analyzed. The influence of shift arguments on the nonlinear differential equations is discussed. Computer simulation has established the behavior of potentially hazardous nuclear objects.

  16. A Model for Generating Multi-hazard Scenarios

    Science.gov (United States)

    Lo Jacomo, A.; Han, D.; Champneys, A.

    2017-12-01

    Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.

  17. Modelling direct tangible damages due to natural hazards

    Science.gov (United States)

    Kreibich, H.; Bubeck, P.

    2012-04-01

    Europe has witnessed a significant increase in direct damages from natural hazards. A further damage increase is expected due to the ongoing accumulation of people and economic assets in risk-prone areas and the effects of climate change, for instance, on the severity and frequency of drought events in the Mediterranean basin. In order to mitigate the impact of natural hazards, improved risk management based on reliable risk analysis is needed. In particular, much research effort is still needed to improve the modelling of damage due to natural hazards. In comparison with hazard modelling, simple approaches still dominate damage assessments, mainly due to limitations in available data and in knowledge of damaging processes and influencing factors. Within the EU project ConHaz, methods as well as data sources and terminology for damage assessments were compiled, systemized and analysed. Similarities and differences between the approaches concerning floods, alpine hazards, coastal hazards and droughts were identified. Approaches for significant improvements of direct tangible damage modelling, with a particular focus on cross-hazard learning, will be presented. Examples from different hazards and countries will be given of how to improve damage databases, the understanding of damaging processes, and damage models, and of how to achieve improvements via validation and uncertainty analyses.

  18. Technology Learning Ratios in Global Energy Models

    International Nuclear Information System (INIS)

    Varela, M.

    2001-01-01

    The introduction of a new technology implies that, as its production and utilisation increase, its operation improves and its investment and production costs decrease. The accumulation of experience and learning for a new technology grows in parallel with the increase of its market share. This process is represented by technological learning curves, and the energy sector is not detached from this process of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of the technologies vary with time, but this trend is incorporated in an exogenous way in these energy models, that is to say, it is only a function of time. This practice is applied to the cost indicators of the technology, such as the specific investment costs, or to the efficiency of the energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of the energy system, including the endogenous modelling of the process of technological progress. Finally, it compares several of the most widely used global energy models (MARKAL, MESSAGE and ERIS), focusing on the use these models make of the concept of technological learning. (Author) 17 refs

  19. An optimization model for transportation of hazardous materials

    International Nuclear Information System (INIS)

    Seyed-Hosseini, M.; Kheirkhah, A. S.

    2005-01-01

    In this paper, the optimal routing problem for the transportation of hazardous materials is studied. Routing for the purpose of reducing the risk of transporting hazardous materials has been studied and formulated by many researchers, and several routing models have been presented to date. These models can be classified into two categories: models for routing a single movement and models for routing multiple movements. In this paper, a routing problem is designed according to the current rules and regulations for road transportation of hazardous materials in Iran. In this problem, the routes for several independent movements are determined simultaneously. To examine the model, the problem of transporting two different dangerous materials in the road network of Mazandaran province in northern Iran is formulated and solved using an integer programming model.

  20. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs of various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in the C language on an IBM-compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis of minimization of the Army's six most important hazardous waste streams. These include solvents, paint stripping wastes, metal plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States.
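
    The core comparison such a model performs is a present-value life cycle cost calculation for each minimization alternative versus current practice. A minimal sketch of that calculation (the cost figures, horizon and discount rate are made-up illustrations, not EAHWM data):

      # Life cycle cost comparison of a waste-minimization alternative vs. current practice.
      def life_cycle_cost(capital, annual_cost, years, discount_rate):
          """Present value of the initial capital outlay plus recurring annual costs."""
          pv_annual = sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))
          return capital + pv_annual

      current_practice = life_cycle_cost(capital=0, annual_cost=120_000, years=15, discount_rate=0.04)
      solvent_recovery = life_cycle_cost(capital=250_000, annual_cost=55_000, years=15, discount_rate=0.04)
      print(round(current_practice), round(solvent_recovery))  # choose the lower life cycle cost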

  1. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    Science.gov (United States)

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.

  2. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  3. A conflict model for the international hazardous waste disposal dispute

    International Nuclear Information System (INIS)

    Hu Kaixian; Hipel, Keith W.; Fang, Liping

    2009-01-01

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  4. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  5. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    To estimate prevalence ratios, a log-binomial model with or without continuous covariates was used. Prevalence ratios for individuals' attitudes towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated using a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used when the model did not converge, for example because of continuous covariates. We examined the association between individuals' attitudes towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios owing to the higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when the prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
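
    The log-binomial model referred to above is a binomial GLM with a log link, so exponentiated coefficients are prevalence ratios rather than odds ratios. The sketch below fits one with statsmodels on synthetic data; the variable names and data are hypothetical, and the closing comment notes the usual fall-backs when the model does not converge.

      # Log-binomial model for a prevalence ratio (illustrative, synthetic data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 2000
      smoker = rng.integers(0, 2, n)
      age = rng.uniform(20, 70, n)
      p = np.clip(0.10 * np.exp(0.35 * smoker + 0.01 * (age - 45)), 0.01, 0.9)
      oppose_ban = rng.binomial(1, p)  # hypothetical attitude outcome

      X = sm.add_constant(pd.DataFrame({"smoker": smoker, "age": age}))
      logbin = sm.GLM(oppose_ban, X,
                      family=sm.families.Binomial(link=sm.families.links.Log())).fit()
      print(np.exp(logbin.params["smoker"]))  # prevalence ratio, smokers vs non-smokers

      # If the log-binomial model fails to converge (common with continuous covariates),
      # the COPY method or a Poisson model with robust standard errors are common fall-backs.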

  6. Ratio

    Science.gov (United States)

    Webster, Nathan A. S.; Pownceby, Mark I.; Madsen, Ian C.; Studer, Andrew J.; Manuel, James R.; Kimpton, Justin A.

    2014-12-01

    Effects of basicity, B (the CaO:SiO2 ratio), on the thermal range, concentration, and formation mechanisms of silico-ferrite of calcium and aluminum (SFCA) and SFCA-I iron ore sinter bonding phases have been investigated using an in situ synchrotron X-ray diffraction-based methodology with subsequent Rietveld refinement-based quantitative phase analysis. SFCA and SFCA-I phases are the key bonding materials in iron ore sinter, and improved understanding of the effects of processing parameters such as basicity on their formation and decomposition may assist in improving the efficiency of industrial iron ore sintering operations. Increasing basicity significantly increased the thermal range of SFCA-I, from 1363 K to 1533 K (1090 °C to 1260 °C) for a mixture with B = 2.48, to ~1339 K to 1535 K (1066 °C to 1262 °C) for a mixture with B = 3.96, and to ~1323 K to 1593 K (1050 °C to 1320 °C) at B = 4.94. Increasing basicity also increased the amount of SFCA-I formed, from 18 wt pct for the mixture with B = 2.48 to 25 wt pct for the B = 4.94 mixture. Higher basicity of the starting sinter mixture will, therefore, increase the amount of SFCA-I, considered to be the more desirable of the two phases. Basicity did not appear to significantly influence the formation mechanism of SFCA-I. It did, however, affect the formation mechanism of SFCA, with the decomposition of SFCA-I coinciding with the formation of a significant amount of additional SFCA in the B = 2.48 and 3.96 mixtures but only a minor amount in the highest-basicity mixture. In situ neutron diffraction enabled characterization of the behavior of magnetite after melting of SFCA, which produced a magnetite plus melt phase assemblage.

  7. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    Science.gov (United States)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  8. TsuPy: Computational robustness in Tsunami hazard modelling

    Science.gov (United States)

    Schäfer, Andreas M.; Wenzel, Friedemann

    2017-05-01

    Modelling wave propagation is the most essential part in assessing the risk and hazard of tsunami and storm surge events. For the computational assessment of the variability of such events, many simulations are necessary. Even today, most of these simulations are generally run on supercomputers due to the large amount of computations necessary. In this study, a simulation framework, named TsuPy, is introduced to quickly compute tsunami events on a personal computer. It uses the parallelized power of GPUs to accelerate computation. The system is tailored to the application of robust tsunami hazard and risk modelling. It links up to geophysical models to simulate event sources. The system is tested and validated using various benchmarks and real-world case studies. In addition, the robustness criterion is assessed based on a sensitivity study comparing the error impact of various model elements e.g. of topo-bathymetric resolution, knowledge of Manning friction parameters and the knowledge of the tsunami source itself. This sensitivity study is tested on inundation modelling of the 2011 Tohoku tsunami, showing that the major contributor to model uncertainty is in fact the representation of earthquake slip as part of the tsunami source profile. TsuPy provides a fast and reliable tool to quickly assess ocean hazards from tsunamis and thus builds the foundation for a globally uniform hazard and risk assessment for tsunamis.

  9. Bias in Hazard Ratios Arising From Misclassification According to Self-Reported Weight and Height in Observational Studies of Body Mass Index and Mortality

    Science.gov (United States)

    Kit, Brian K; Graubard, Barry I

    2018-01-01

    Misclassification of body mass index (BMI) categories arising from self-reported weight and height can bias hazard ratios in studies of BMI and mortality. We examined the effects of such misclassification on hazard ratios using national US survey data for 1976 through 2010 that had both measured and self-reported weight and height along with mortality follow-up for 48,763 adults and a subset of 17,405 healthy never-smokers. BMI was categorized using both measured and self-reported data. Both the magnitude and direction of bias varied according to the underlying hazard ratios in measured data, showing that findings on bias from one study should not be extrapolated to a study with different underlying hazard ratios. Because of misclassification effects, self-reported weight and height cannot reliably indicate the lowest-risk BMI category. PMID:29309516

  10. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...

  11. Toward Building a New Seismic Hazard Model for Mainland China

    Science.gov (United States)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.

    2015-12-01

    At present, the only publicly available seismic hazard model for mainland China was generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPEs) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonic regions. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.

  12. Goodness-of-fit test for proportional subdistribution hazards model.

    Science.gov (United States)

    Zhou, Bingqing; Fine, Jason; Laird, Glen

    2013-09-30

    This paper concerns using modified weighted Schoenfeld residuals to test the proportionality of subdistribution hazards for the Fine-Gray model, similar to the tests proposed by Grambsch and Therneau for independently censored data. We develop a score test for the time-varying coefficients based on the modified Schoenfeld residuals derived assuming a certain form of non-proportionality. The methods perform well in simulations and a real data analysis of breast cancer data, where the treatment effect exhibits non-proportional hazards. Copyright © 2013 John Wiley & Sons, Ltd.

  13. The 2014 United States National Seismic Hazard Model

    Science.gov (United States)

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.

    2015-01-01

    New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.

  14. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    Science.gov (United States)

    Lee, Saro; Park, Inhye

    2013-09-30

    Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the ground subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, including topography, geology, and various ground-engineering data for the subsidence area, were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping, providing a probabilistic model for comparison. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
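
    The workflow above (a 50/50 split of the inventory, a tree classifier on the conditioning factors, and AUC validation on the held-out half) can be sketched in a few lines. scikit-learn provides CART rather than the CHAID or QUEST algorithms used in the paper, so the snippet below is only a stand-in, with hypothetical features and synthetic data.

      # Tree-based ground-subsidence hazard classification with AUC validation (illustrative).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(4)
      n = 1000
      X = np.column_stack([rng.uniform(0, 60, n),    # e.g. depth of mined cavity (m)
                           rng.uniform(0, 45, n),    # e.g. slope (degrees)
                           rng.integers(0, 4, n)])   # e.g. geology class
      p = 1 / (1 + np.exp(-(0.06 * X[:, 0] - 0.04 * X[:, 1] - 2.5)))
      y = rng.binomial(1, p)                         # 1 = observed subsidence

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
      auc = roc_auc_score(y_test, tree.predict_proba(X_test)[:, 1])
      print(f"validation AUC = {auc:.3f}")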

  15. A New Seismic Hazard Model for Mainland China

    Science.gov (United States)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.

    2017-12-01

    We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
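
    The tapered Gutenberg-Richter distribution used above behaves like the ordinary Gutenberg-Richter law at small magnitudes but decays exponentially in seismic moment beyond a corner magnitude, which is what allows the geodetic moment rate to constrain the corner. A short sketch of its survival function (the parameter values are placeholders, not the model's calibrated values):

      # Tapered Gutenberg-Richter (TGR) magnitude-frequency relation (illustrative parameters).
      import numpy as np

      def moment_from_mw(mw):
          """Hanks-Kanamori relation: seismic moment in N*m from moment magnitude."""
          return 10.0 ** (1.5 * mw + 9.05)

      def tgr_ccdf(mw, mw_threshold, beta, mw_corner):
          """Fraction of earthquakes with magnitude >= mw (TGR survival function)."""
          m, mt, mc = moment_from_mw(mw), moment_from_mw(mw_threshold), moment_from_mw(mw_corner)
          return (mt / m) ** beta * np.exp((mt - m) / mc)

      mags = np.arange(5.0, 8.6, 0.5)
      rate_m5 = 4.0  # hypothetical annual rate of M >= 5 events in one source zone
      annual_rates = rate_m5 * tgr_ccdf(mags, mw_threshold=5.0, beta=0.65, mw_corner=8.0)
      for mw, r in zip(mags, annual_rates):
          print(f"M >= {mw:.1f}: {r:.4f} per year")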

  16. Standardized binomial models for risk or prevalence ratios and differences.

    Science.gov (United States)

    Richardson, David B; Kinlaw, Alan C; MacLehose, Richard F; Cole, Stephen R

    2015-10-01

    Epidemiologists often analyse binary outcomes in cohort and cross-sectional studies using multivariable logistic regression models, yielding estimates of adjusted odds ratios. It is widely known that the odds ratio closely approximates the risk or prevalence ratio when the outcome is rare, and it does not do so when the outcome is common. Consequently, investigators may decide to directly estimate the risk or prevalence ratio using a log binomial regression model. We describe the use of a marginal structural binomial regression model to estimate standardized risk or prevalence ratios and differences. We illustrate the proposed approach using data from a cohort study of coronary heart disease status in Evans County, Georgia, USA. The approach reduces problems with model convergence typical of log binomial regression by shifting all explanatory variables except the exposures of primary interest from the linear predictor of the outcome regression model to a model for the standardization weights. The approach also facilitates evaluation of departures from additivity in the joint effects of two exposures. Epidemiologists should consider reporting standardized risk or prevalence ratios and differences in cohort and cross-sectional studies. These are readily-obtained using the SAS, Stata and R statistical software packages. The proposed approach estimates the exposure effect in the total population. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
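
    The key move described above is to shift the adjustment covariates out of the outcome regression and into a model for standardization weights, after which a weighted log-binomial model containing only the exposure returns a standardized prevalence ratio. A rough sketch of that two-step procedure on synthetic data (variable names and the weight stabilization are illustrative; the SAS, Stata and R implementations mentioned in the abstract are not reproduced here):

      # Standardized prevalence ratio via weights plus a weighted log-binomial model (sketch).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 3000
      age = rng.uniform(30, 70, n)
      exposed = rng.binomial(1, 1 / (1 + np.exp(-(age - 50) / 10)))
      chd = rng.binomial(1, np.clip(0.05 * np.exp(0.5 * exposed + 0.01 * (age - 50)), 0, 0.9))
      df = pd.DataFrame({"age": age, "exposed": exposed, "chd": chd})

      # Step 1: model the exposure given covariates -> stabilized standardization weights.
      Z = sm.add_constant(df[["age"]])
      ps = sm.GLM(df["exposed"], Z, family=sm.families.Binomial()).fit().predict(Z)
      p_marg = df["exposed"].mean()
      w = np.where(df["exposed"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

      # Step 2: weighted log-binomial model with the exposure as the only regressor.
      out = sm.GLM(df["chd"], sm.add_constant(df[["exposed"]]),
                   family=sm.families.Binomial(link=sm.families.links.Log()),
                   freq_weights=w).fit()
      print(np.exp(out.params["exposed"]))  # standardized prevalence ratio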

  17. Rockfall hazard analysis using LiDAR and spatial modeling

    Science.gov (United States)

    Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho

    2010-05-01

    Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.

  18. Defaultable Game Options in a Hazard Process Model

    Directory of Open Access Journals (Sweden)

    Tomasz R. Bielecki

    2009-01-01

    Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection of arbitrage prices with a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with a sigma-martingale cost under a risk-neutral measure.

  19. Recent Experiences in Aftershock Hazard Modelling in New Zealand

    Science.gov (United States)

    Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.

    2014-12-01

    The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence of events which began in 2010 and included the destructive Christchurch earthquake of February, 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. This has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50-years. With little seismicity in the region in historical times, a controlling factor in the rate is whether-or-not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock

  20. A Monte Carlo methodology for modelling ashfall hazards

    Science.gov (United States)

    Hurst, Tony; Smith, Warwick

    2004-12-01

    We have developed a methodology for quantifying the probability of particular thicknesses of tephra at any given site, using Monte Carlo methods. This is a part of the development of a probabilistic volcanic hazard model (PVHM) for New Zealand, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo procedure allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. This method can handle the effects of multiple volcanic sources, each source with its own characteristics. We accumulate the tephra thicknesses from all sources to estimate the combined ashfall hazard, expressed as the frequency with which any given depth of tephra is likely to be deposited at selected sites. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from sediment cores in Auckland give useful bounds for the likely total volumes erupted from Egmont Volcano (Mt. Taranaki), 280 km away, during the last 130,000 years.
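
    The probabilistic recipe described above (simulate many eruptions with randomly drawn volume and wind, record the deposited thickness at a site, and convert exceedance counts into annual probabilities or return periods) can be illustrated with a toy stand-in for the ASHFALL dispersal step. Every number in the sketch below (eruption rate, volume distribution, wind fraction, thickness function) is a made-up placeholder:

      # Monte Carlo ashfall-exceedance sketch with a placeholder dispersal model.
      import numpy as np

      rng = np.random.default_rng(6)

      def thickness_at_site(volume_km3, wind_towards_site):
          """Crude placeholder: deposit grows with volume, boosted when wind blows site-ward."""
          wind_factor = np.where(wind_towards_site, 1.5, 0.3)
          return 10.0 * volume_km3 * wind_factor  # tephra thickness in millimetres

      n_sim = 100_000
      annual_eruption_rate = 1 / 300.0                 # hypothetical eruptions per year
      volumes = 10 ** rng.uniform(-2, 0.5, n_sim)      # 0.01-3 km^3, log-uniform
      wind_hits = rng.random(n_sim) < 0.25             # share of wind profiles toward the site
      thickness = thickness_at_site(volumes, wind_hits)

      for t_mm in (1, 5, 20):
          p_exceed = np.mean(thickness >= t_mm)              # per-eruption exceedance probability
          annual_prob = annual_eruption_rate * p_exceed      # annual exceedance probability
          print(f">= {t_mm} mm: return period ~ {1 / annual_prob:,.0f} years")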

  1. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the life time distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on the data of type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of Bayes with regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimations for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as a special case by letting c = 2
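
    For concreteness, a short sketch of this hazard model: the cumulative hazard H(t) = at + (b/c)t^c, the survival function S(t) = exp(-H(t)), and lifetimes sampled by inverting H(T) = E with E ~ Exp(1), from which a type II censored sample can be taken. Parameter values are illustrative only.

```python
# Minimal sketch of the general hazard rate model h(t) = a + b*t**(c-1).
import numpy as np
from scipy.optimize import brentq

a, b, c = 0.5, 0.2, 2.0            # c = 2 recovers the linear hazard a + b*t

def H(t):
    """Cumulative hazard: integral of a + b*u**(c-1) from 0 to t."""
    return a * t + (b / c) * t ** c

def survival(t):
    return np.exp(-H(t))

def sample_lifetimes(n, rng=np.random.default_rng(0)):
    """Draw lifetimes by solving H(T) = E with E ~ Exp(1)."""
    e = rng.exponential(size=n)
    return np.array([brentq(lambda t: H(t) - ei, 0.0, 1e3) for ei in e])

times = np.sort(sample_lifetimes(1000))
r = 300                             # type II censoring: stop at the r-th failure
observed = times[:r]
print(f"S(1.0) = {survival(1.0):.3f}, mean of first {r} failure times = {observed.mean():.3f}")
```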

  2. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  3. Modelling public risk evaluation of natural hazards: a conceptual approach

    Science.gov (United States)

    Plattner, Th.

    2005-04-01

    In recent years, the way of dealing with natural hazards in Switzerland has shifted away from being hazard-oriented towards a risk-based approach. Decreasing societal acceptance of risk, accompanied by increasing marginal costs of protective measures and decreasing financial resources, causes an optimization problem. Therefore, the new focus lies on the mitigation of the hazard's risk in accordance with economical, ecological and social considerations. This modern procedure requires an approach in which not only technological, engineering or scientific aspects of the definition of the hazard or the computation of the risk are considered, but also the public concerns about the acceptance of these risks. These aspects of a modern risk approach enable a comprehensive assessment of the (risk) situation and, thus, sound risk management decisions. In Switzerland, however, the competent authorities suffer from a lack of decision criteria, as they do not know what risk level the public is willing to accept. Consequently, the authorities need to know what society thinks about risks. A formalized model that allows at least a crude simulation of the public risk evaluation could therefore be a useful tool to support effective and efficient risk mitigation measures. This paper presents a conceptual approach to such an evaluation model using perception-affecting factors (PAF), evaluation criteria (EC) and several factors without any immediate relation to the risk itself, but rather to the evaluating person. Finally, the decision about the acceptance (Acc) of a certain risk i is made by comparing the perceived risk R_i,perc with the acceptable risk R_i,acc.

  4. Report 6: Guidance document. Man-made hazards and Accidental Aircraft Crash hazards modelling and implementation in extended PSA

    International Nuclear Information System (INIS)

    Kahia, S.; Brinkman, H.; Bareith, A.; Siklossy, T.; Vinot, T.; Mateescu, T.; Espargilliere, J.; Burgazzi, L.; Ivanov, I.; Bogdanov, D.; Groudev, P.; Ostapchuk, S.; Zhabin, O.; Stojka, T.; Alzbutas, R.; Kumar, M.; Nitoi, M.; Farcasiu, M.; Borysiewicz, M.; Kowal, K.; Potempski, S.

    2016-01-01

    The goal of this report is to provide guidance on practices to model man-made hazards (mainly external fires and explosions) and accidental aircraft crash hazards and implement them in extended Level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the first ASAMPSA-E End Users Workshop (May 2014, Uppsala, Sweden). The objective of WP22 is to provide the solutions for purposes of different parts of man-made hazards Level 1 PSA fulfilment. This guidance is focusing on man-made hazards, namely: external fires and explosions, and accidental aircraft crash hazards. Guidance developed refers to existing guidance whenever possible. The initial part of guidance (WP21 part) reflects current practices to assess the frequencies for each type of hazards or combination of hazards (including correlated hazards) as initiating event for PSAs. The sources and quality of hazard data, the elements of hazard assessment methodologies and relevant examples are discussed. Classification and criteria to properly assess hazard combinations as well as examples and methods for assessment of these combinations are included in this guidance. In appendixes additional material is presented with the examples of practical approaches to aircraft crash and man-made hazard. The following issues are addressed: 1) Hazard assessment methodologies, including issues related to hazard combinations. 2) Modelling equipment of safety related SSC, 3) HRA, 4) Emergency response, 5) Multi-unit issues. Recommendations and also limitations, gaps identified in the existing methodologies and a list of open issues are included. At all stages of this guidance and especially from an industrial end-user perspective, one must keep in mind that the development of man-made hazards probabilistic analysis must be conditioned to the ability to ultimately obtain a representative risk

  5. A decision model for the risk management of hazardous processes

    International Nuclear Information System (INIS)

    Holmberg, J.E.

    1997-03-01

    A decision model for risk management of hazardous processes as an optimisation problem of a point process is formulated in the study. In the approach, the decisions made by the management are divided into three categories: (1) planned process lifetime, (2) selection of the design and, (3) operational decisions. These three controlling methods play quite different roles in the practical risk management, which is also reflected in our approach. The optimisation of the process lifetime is related to the licensing problem of the process. It provides a boundary condition for a feasible utility function that is used as the actual objective function, i.e., maximizing the process lifetime utility. By design modifications, the management can affect the inherent accident hazard rate of the process. This is usually a discrete optimisation task. The study particularly concentrates upon the optimisation of the operational strategies given a certain design and licensing time. This is done by a dynamic risk model (marked point process model) representing the stochastic process of events observable or unobservable to the decision maker. An optimal long term control variable guiding the selection of operational alternatives in short term problems is studied. The optimisation problem is solved by the stochastic quasi-gradient procedure. The approach is illustrated by a case study. (23 refs.)

  6. Uncertainties in modeling hazardous gas releases for emergency response

    Directory of Open Access Journals (Sweden)

    Kathrin Baumann-Stanzer

    2011-02-01

    Full Text Available In case of an accidental release of toxic gases the emergency responders need fast information about the affected area and the maximum impact. Hazard distances calculated with the models MET, ALOHA, BREEZE, TRACE and SAMS for scenarios with chlorine, ammonia and butane releases are compared in this study. The variations of the model results are measures for uncertainties in source estimation and dispersion calculation. Model runs for different wind speeds, atmospheric stability and roughness lengths indicate the model sensitivity to these input parameters. In-situ measurements at two urban near-traffic sites are compared to results of the Integrated Nowcasting through Comprehensive Analysis (INCA) in order to quantify uncertainties in the meteorological input. The hazard zone estimates from the models vary by up to a factor of 4 due to different input requirements as well as different internal model assumptions. None of the models is found to be 'more conservative' than the others in all scenarios. INCA wind speeds are correlated with in-situ observations at two urban sites in Vienna with a factor of 0.89. The standard deviations of the normal error distribution are 0.8 m s-1 in wind speed, on the scale of 50 degrees in wind direction, up to 4 °C in air temperature and up to 10 % in relative humidity. The observed air temperature and humidity are well reproduced by INCA with correlation coefficients of 0.96 to 0.99. INCA is therefore found to give a good representation of the local meteorological conditions. Besides real-time data, the INCA short-range forecast for the following hours may support the action planning of the first responders.

  7. Uncertainties in modeling hazardous gas releases for emergency response

    Energy Technology Data Exchange (ETDEWEB)

    Baumann-Stanzer, Kathrin; Stenzel, Sirma [Zentralanstalt fuer Meteorologie und Geodynamik, Vienna (Austria)

    2011-02-15

    In case of an accidental release of toxic gases the emergency responders need fast information about the affected area and the maximum impact. Hazard distances calculated with the models MET, ALOHA, BREEZE, TRACE and SAMS for scenarios with chlorine, ammonia and butane releases are compared in this study. The variations of the model results are measures for uncertainties in source estimation and dispersion calculation. Model runs for different wind speeds, atmospheric stability and roughness lengths indicate the model sensitivity to these input parameters. In-situ measurements at two urban near-traffic sites are compared to results of the Integrated Nowcasting through Comprehensive Analysis (INCA) in order to quantify uncertainties in the meteorological input. The hazard zone estimates from the models vary by up to a factor of 4 due to different input requirements as well as different internal model assumptions. None of the models is found to be 'more conservative' than the others in all scenarios. INCA wind speeds are correlated with in-situ observations at two urban sites in Vienna with a factor of 0.89. The standard deviations of the normal error distribution are 0.8 m s-1 in wind speed, on the scale of 50 degrees in wind direction, up to 4 °C in air temperature and up to 10 % in relative humidity. The observed air temperature and humidity are well reproduced by INCA with correlation coefficients of 0.96 to 0.99. INCA is therefore found to give a good representation of the local meteorological conditions. Besides real-time data, the INCA short-range forecast for the following hours may support the action planning of the first responders. (orig.)

  8. Restricted mean survival time: an alternative to the hazard ratio for the design and analysis of randomized trials with a time-to-event outcome.

    Science.gov (United States)

    Royston, Patrick; Parmar, Mahesh K B

    2013-12-07

    Designs and analyses of clinical trials with a time-to-event outcome almost invariably rely on the hazard ratio to estimate the treatment effect and implicitly, therefore, on the proportional hazards assumption. However, the results of some recent trials indicate that there is no guarantee that the assumption will hold. Here, we describe the use of the restricted mean survival time as a possible alternative tool in the design and analysis of these trials. The restricted mean is a measure of average survival from time 0 to a specified time point, and may be estimated as the area under the survival curve up to that point. We consider the design of such trials according to a wide range of possible survival distributions in the control and research arm(s). The distributions are conveniently defined as piecewise exponential distributions and can be specified through piecewise constant hazards and time-fixed or time-dependent hazard ratios. Such designs can embody proportional or non-proportional hazards of the treatment effect. We demonstrate the use of restricted mean survival time and a test of the difference in restricted means as an alternative measure of treatment effect. We support the approach through the results of simulation studies and in real examples from several cancer trials. We illustrate the required sample size under proportional and non-proportional hazards, also the significance level and power of the proposed test. Values are compared with those from the standard approach which utilizes the logrank test. We conclude that the hazard ratio cannot be recommended as a general measure of the treatment effect in a randomized controlled trial, nor is it always appropriate when designing a trial. Restricted mean survival time may provide a practical way forward and deserves greater attention.
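
    A minimal sketch of the estimator on simulated data: the restricted mean is the area under the Kaplan-Meier step function up to the chosen horizon t*. The survival and censoring distributions below are assumptions for illustration.

```python
# Restricted mean survival time (RMST): area under the Kaplan-Meier curve
# from 0 to a pre-specified horizon t_star, computed from scratch with numpy.
import numpy as np

def kaplan_meier(times, events):
    """Return sorted observation times and the KM survival estimate just after each one."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, s = [], 1.0
    for d in events:
        if d:                       # event (censored observations only reduce the risk set)
            s *= 1.0 - 1.0 / n_at_risk
        surv.append(s)
        n_at_risk -= 1
    return times, np.asarray(surv)

def rmst(times, events, t_star):
    """Integrate the KM step function from 0 to t_star."""
    t, s = kaplan_meier(np.asarray(times, float), np.asarray(events, bool))
    mask = t <= t_star
    grid = np.concatenate(([0.0], t[mask], [t_star]))
    step = np.concatenate(([1.0], s[mask]))     # survival on each interval of the grid
    return np.sum(np.diff(grid) * step)

rng = np.random.default_rng(1)
t_event = rng.exponential(scale=24.0, size=200)      # months (simulated)
t_censor = rng.uniform(0, 36, size=200)
times = np.minimum(t_event, t_censor)
events = t_event <= t_censor
print(f"RMST up to 36 months: {rmst(times, events, 36.0):.1f} months")
```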

  9. Modeling emergency evacuation for major hazard industrial sites

    International Nuclear Information System (INIS)

    Georgiadou, Paraskevi S.; Papazoglou, Ioannis A.; Kiranoudis, Chris T.; Markatos, Nikolaos C.

    2007-01-01

    A model providing the temporal and spatial distribution of the population under evacuation around a major hazard facility is developed. A discrete state stochastic Markov process simulates the movement of the evacuees. The area around the hazardous facility is divided into nodes connected among themselves with links representing the road system of the area. Transition from node-to-node is simulated as a random process where the probability of transition depends on the dynamically changed states of the destination and origin nodes and on the link between them. Solution of the Markov process provides the expected distribution of the evacuees in the nodes of the area as a function of time. A Monte Carlo solution of the model provides in addition a sample of actual trajectories of the evacuees. This information coupled with an accident analysis which provides the spatial and temporal distribution of the extreme phenomenon following an accident, determines a sample of the actual doses received by the evacuees. Both the average dose and the actual distribution of doses are then used as measures in evaluating alternative emergency response strategies. It is shown that in some cases the estimation of the health consequences by the average dose might be either too conservative or too non-conservative relative to the one corresponding to the distribution of the received dose and hence not a suitable measure to evaluate alternative evacuation strategies
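
    A toy version of the node-based Markov idea: a small road network is encoded as a transition matrix whose rows give per-time-step movement probabilities, and the expected distribution of evacuees is propagated forward. The network, the probabilities and the population are invented, and the dynamic state-dependence of the real model is omitted.

```python
# Discrete-state Markov sketch of evacuee movement between nodes.
import numpy as np

# Nodes: 0 = facility area, 1 = intermediate village, 2 = highway, 3 = safe zone
P = np.array([
    [0.40, 0.35, 0.25, 0.00],   # from facility area
    [0.00, 0.30, 0.50, 0.20],   # from intermediate village
    [0.00, 0.00, 0.40, 0.60],   # from highway
    [0.00, 0.00, 0.00, 1.00],   # safe zone is absorbing
])

population = np.array([5000.0, 1500.0, 0.0, 0.0])   # initial evacuees per node

for step in range(10):                               # e.g. 10-minute time steps
    print(f"t = {step}: " + "  ".join(f"{p:7.0f}" for p in population))
    population = population @ P                      # expected distribution at next step
```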

  10. Opinion: The use of natural hazard modeling for decision making under uncertainty

    Science.gov (United States)

    David E. Calkin; Mike Mentis

    2015-01-01

    Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...

  11. Applied Prevalence Ratio estimation with different Regression models: An example from a cross-national study on substance use research.

    Science.gov (United States)

    Espelt, Albert; Marí-Dell'Olmo, Marc; Penelo, Eva; Bosque-Prous, Marina

    2016-06-14

    To examine the differences between the Prevalence Ratio (PR) and the Odds Ratio (OR) in a cross-sectional study and to provide tools to calculate the PR using two statistical packages widely used in substance use research (STATA and R). We used cross-sectional data from 41,263 participants of 16 European countries participating in the Survey on Health, Ageing and Retirement in Europe (SHARE). The dependent variable, hazardous drinking, was calculated using the Alcohol Use Disorders Identification Test - Consumption (AUDIT-C). The main independent variable was gender. Other variables used were: age, educational level and country of residence. The PR of hazardous drinking in men relative to women was estimated using the Mantel-Haenszel method, log-binomial regression models and Poisson regression models with robust variance. These estimates were compared to the OR calculated using logistic regression models. The prevalence of hazardous drinkers varied among countries. Generally, men had a higher prevalence of hazardous drinking than women [PR=1.43 (1.38-1.47)]. The estimated PR was identical regardless of the method and the statistical package used. However, the OR overestimated the PR, depending on the prevalence of hazardous drinking in the country. In cross-sectional studies, where comparisons between countries with differences in the prevalence of the disease or condition are made, it is advisable to use the PR instead of the OR.
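
    A hedged Python analogue of the modified Poisson approach (the paper itself provides Stata and R code): a Poisson GLM with robust sandwich standard errors fitted with a recent statsmodels on simulated data, whose exponentiated coefficients are read as prevalence ratios. The variable names and the simulated prevalence are assumptions.

```python
# Prevalence ratios via Poisson regression with robust (HC1) variance.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age": rng.integers(50, 90, n),
})
# Simulate hazardous drinking with a higher prevalence in men (true PR about 1.4).
p = np.clip(0.15 * (1.4 ** df["male"]) * (1 - 0.002 * (df["age"] - 50)), 0, 1)
df["hazardous"] = rng.binomial(1, p)

model = smf.glm("hazardous ~ male + age", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC1")
pr = np.exp(model.params)              # exponentiated coefficients = prevalence ratios
ci = np.exp(model.conf_int())
print(pd.DataFrame({"PR": pr, "2.5%": ci[0], "97.5%": ci[1]}).round(2))
```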

  12. A multimodal location and routing model for hazardous materials transportation.

    Science.gov (United States)

    Xie, Yuanchang; Lu, Wei; Wang, Wen; Quadrifoglio, Luca

    2012-08-15

    The recent US Commodity Flow Survey data suggest that transporting hazardous materials (HAZMAT) often involves multiple modes, especially for long-distance transportation. However, not much research has been conducted on HAZMAT location and routing on a multimodal transportation network. Most existing HAZMAT location and routing studies focus exclusively on single mode (either highways or railways). Motivated by the lack of research on multimodal HAZMAT location and routing and the fact that there is an increasing demand for it, this research proposes a multimodal HAZMAT model that simultaneously optimizes the locations of transfer yards and transportation routes. The developed model is applied to two case studies of different network sizes to demonstrate its applicability. The results are analyzed and suggestions for future research are provided. Published by Elsevier B.V.

  13. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1990-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms, and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs

  14. A simple GMM estimator for the semi-parametric mixed proportional hazard model

    OpenAIRE

    Bijwaard, G.E.; Ridder, G.; Woutersen, T.

    2012-01-01

    Ridder and Woutersen (Ridder, G., and T. Woutersen. 2003. “The Singularity of the Efficiency Bound of the Mixed Proportional Hazard Model.” Econometrica 71: 1579–1589) have shown that under a weak condition on the baseline hazard, there exist root-N consistent estimators of the parameters in a semiparametric Mixed Proportional Hazard model with a parametric baseline hazard and unspecified distribution of the unobserved heterogeneity. We extend the linear rank estimator (LRE) of Tsiatis (Tsiat...

  15. VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation

    Science.gov (United States)

    Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.

    2009-12-01

    Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the

  16. A Model Suggestion to Predict Leverage Ratio for Construction Projects

    Directory of Open Access Journals (Sweden)

    Özlem Tüz

    2013-12-01

    Full Text Available Due to its nature, construction is an industry with high uncertainty and risk, and construction firms carry high leverage ratios. Firms with low equity work on big projects through the progress payment system, but in this case even a small shortfall in the planned cash flows constitutes a major risk for the company. The use of leverage makes it possible to achieve large-scale, high-profit targets with a small investment, but it also brings high risk with it: investors may lose all or a portion of their money. This study targets the monitoring and measurement of the leverage ratio in the face of shifts in the cash inflows of construction projects, which use high leverage and low cash to do business in the sector. The cash need caused by drifting cash inflows can be seen through the model. Work is done in the early stages of the project with little capital, but in the later stages a rapidly growing capital need arises. The values obtained from the model may be used to supply the required capital at the right time by anticipating the risks caused by delays in the cash flow of construction projects that use high leverage ratios.

  17. Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank

    2008-11-01

    Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) To identify the key processes and chemical and/or environmental parameters that determine the exposure; thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application

  18. Preliminary deformation model for National Seismic Hazard map of Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z. [Geodesy Research Division, Faculty of Earth Science and Technology, Institute of Technology Bandung (Indonesia); Susilo,; Efendi, Joni [Agency for Geospatial Information (BIG) (Indonesia)

    2015-04-24

    A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed as the block rotation and strain accumulation function in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creeping in a shallower part. This research shows that rigid body motion dominates the deformation pattern with magnitudes of more than 15 mm/year, except in the narrow areas near subduction zones and active faults where significant deformation reaches 25 mm/year.

  19. Modelling Of Anticipated Damage Ratio On Breakwaters Using Fuzzy Logic

    Science.gov (United States)

    Mercan, D. E.; Yagci, O.; Kabdasli, S.

    2003-04-01

    In breakwater design the determination of armour unit weight is especially important in terms of the structure's life. In a typical experimental breakwater stability study, different wave series composed of different wave height, wave period and wave steepness characteristics are applied in order to investigate the performance of the structure. Using a classical approach, a regression equation is generated for the damage ratio as a function of characteristic wave height; the parameters wave period and wave steepness are not considered. In this study, differing from the classical approach, a fuzzy-logic relationship giving the damage ratio as a function of mean wave period (T_m), wave steepness (H_s/L_m) and significant wave height (H_s) was generated. The system's inputs were mean wave period (T_m), wave steepness (H_s/L_m) and significant wave height (H_s). For fuzzification, all input variables were divided into three fuzzy subsets, their membership functions were defined using the method developed by Mamdani (Mamdani, 1974) and the rules were written. For defuzzification, the centroid method was used. In order to calibrate and test the generated models an experimental study was conducted. The experiments were performed in a wave flume (24 m long, 1.0 m wide and 1.0 m high) using 20 different irregular wave series (P-M spectrum). Throughout the study, the water depth was 0.6 m and the breakwater cross-sectional slope was 1V/2H. In the armour layer, a type of artificial armour unit known as Antifer cubes was used. The results of the established fuzzy logic model and the regression equation model were compared with the experimental data, and it was determined that the established fuzzy logic model gave a more accurate prediction of the damage ratio on this type of breakwater. References: Mamdani, E.H., "Application of Fuzzy Algorithms for Control of Simple Dynamic Plant", Proc. IEE, vol. 121, no. 12, December 1974.
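
    A toy Mamdani-style inference in the spirit described above, with invented membership functions and rules rather than the paper's calibrated model: inputs are fuzzified with triangular subsets, rules are fired with the min operator, outputs are aggregated with max and defuzzified with the centroid.

```python
# Minimal Mamdani fuzzy sketch: wave height and steepness -> damage ratio.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

damage_axis = np.linspace(0.0, 0.5, 501)        # output universe: damage ratio
out_low  = tri(damage_axis, 0.00, 0.05, 0.15)
out_med  = tri(damage_axis, 0.10, 0.20, 0.30)
out_high = tri(damage_axis, 0.25, 0.40, 0.50)

def infer(Hs, steepness):
    # Input fuzzification (subset limits chosen arbitrarily for the sketch, model scale).
    hs_low, hs_high = tri(Hs, 0.00, 0.05, 0.12), tri(Hs, 0.08, 0.15, 0.25)
    st_low, st_high = tri(steepness, 0.00, 0.02, 0.04), tri(steepness, 0.03, 0.05, 0.08)
    # Three example rules, fired with the min operator (Mamdani).
    w_low, w_med, w_high = min(hs_low, st_low), min(hs_high, st_low), min(hs_high, st_high)
    # Aggregate the clipped output sets with max, then take the centroid.
    agg = np.maximum.reduce([np.minimum(w_low, out_low),
                             np.minimum(w_med, out_med),
                             np.minimum(w_high, out_high)])
    if agg.sum() == 0:
        return 0.0
    return float(np.sum(damage_axis * agg) / np.sum(agg))

print(f"Predicted damage ratio: {infer(Hs=0.14, steepness=0.045):.3f}")
```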

  20. A Probabilistic Tsunami Hazard Study of the Auckland Region, Part I: Propagation Modelling and Tsunami Hazard Assessment at the Shoreline

    Science.gov (United States)

    Power, William; Wang, Xiaoming; Lane, Emily; Gillibrand, Philip

    2013-09-01

    Regional source tsunamis represent a potentially devastating threat to coastal communities in New Zealand, yet are infrequent events for which little historical information is available. It is therefore essential to develop robust methods for quantitatively estimating the hazards posed, so that effective mitigation measures can be implemented. We develop a probabilistic model for the tsunami hazard posed to the Auckland region of New Zealand from the Kermadec Trench and the southern New Hebrides Trench subduction zones. An innovative feature of our model is the systematic analysis of uncertainty regarding the magnitude-frequency distribution of earthquakes in the source regions. The methodology is first used to estimate the tsunami hazard at the coastline, and then used to produce a set of scenarios that can be applied to produce probabilistic maps of tsunami inundation for the study region; the production of these maps is described in part II. We find that the 2,500 year return period regional source tsunami hazard for the densely populated east coast of Auckland is dominated by events originating in the Kermadec Trench, while the equivalent hazard to the sparsely populated west coast is approximately equally due to events on the Kermadec Trench and the southern New Hebrides Trench.

  1. A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem

    OpenAIRE

    Boyer, Omid; Sai Hong, Tang; Pedram, Ali; Mohd Yusuff, Rosnah Bt; Zulkifli, Norzima

    2013-01-01

    Technological progress is a cause of the worldwide increase in industrial hazardous waste. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can be a result of the location of undesirable facilities and also of routing hazardous waste. In this paper a bi-objective mixed integer programming model for locating and routing industrial hazardous waste is developed. The first objective is total cost minimization including tr...

  2. A modeling framework for investment planning in interdependent infrastructures in multi-hazard environments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael

    2013-09-01

    Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.

  3. Research collaboration, hazard modeling and dissemination in volcanology with Vhub

    Science.gov (United States)

    Palma Lizana, J. L.; Valentine, G. A.

    2011-12-01

    Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. VHub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of the vhub capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environment for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University

  4. Functional Model to Estimate the Inelastic Displacement Ratio

    Directory of Open Access Journals (Sweden)

    Ceangu Vlad

    2017-12-01

    Full Text Available In this paper a functional model to estimate the inelastic displacement ratio as a function of the ductility factor is presented. The coefficients of the functional model are approximated using nonlinear regression. The data used are computed displacements for inelastic single-degree-of-freedom systems with a fixed ductility factor. Inelastic seismic response spectra of constant ductility factors are used to generate the data. A method is presented for selecting ground motions whose frequency content is similar to that of the ones picked for the comparison. The variability of the seismic response of nonlinear single-degree-of-freedom systems with different hysteretic behavior is also presented.
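
    A small sketch of the fitting step: nonlinear regression (scipy's curve_fit) of an assumed functional form for the inelastic displacement ratio versus ductility. The functional form, the synthetic data and the coefficient names are placeholders, since the paper's actual expression is not reproduced here.

```python
# Fit coefficients of an assumed inelastic displacement ratio model C(mu).
import numpy as np
from scipy.optimize import curve_fit

def c_model(mu, a, b):
    """Assumed form: C -> 1 as mu -> 1 and grows with ductility."""
    return 1.0 + a * (mu - 1.0) ** b

# Synthetic "computed displacement ratio" data for SDOF systems (illustrative).
mu_data = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
c_data = np.array([1.00, 1.06, 1.12, 1.25, 1.38, 1.61, 1.80])

popt, pcov = curve_fit(c_model, mu_data, c_data, p0=[0.1, 1.0], bounds=(0.0, np.inf))
perr = np.sqrt(np.diag(pcov))
print(f"a = {popt[0]:.3f} +/- {perr[0]:.3f}, b = {popt[1]:.3f} +/- {perr[1]:.3f}")
print(f"C(mu=5) approx {c_model(5.0, *popt):.2f}")
```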

  5. From deterministic hazard modelling to risk and loss estimation

    NARCIS (Netherlands)

    Quan Luna, B.; van Westen, C.J.; ... [et al.],; Malet, J.-P.; Glade, T.; Casagli, N.

    2010-01-01

    Several steps need to be accomplished for a quantitative landslide risk assessment. First, an analysis of the hazardous process and intensity has to be performed. Afterwards, the physical consequences inflicted by the hazard need to be quantified, preferentially in monetary values. For that purpose,

  6. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which each 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and un-filtered, and on other pathways from the building, such as doorways, both open and closed. This study presents how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled and the dose received by a downwind receptor can be estimated, with the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
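
    The combinatory idea can be shown in a few lines, without any of the MELCOR physics: the LPF of a serial path is the product of its compartment-to-compartment factors (the assumed 0.5 × 0.5 gives 0.25), and a building total can be formed as a flow-weighted sum over parallel pathways. The path names, flow fractions and factors below are illustrative.

```python
# Combining leak path factors: product along a path, flow-weighted sum over paths.
def path_lpf(factors):
    """Serial path: e.g. the assumed 0.5 x 0.5 gives 0.25."""
    lpf = 1.0
    for f in factors:
        lpf *= f
    return lpf

paths = [
    {"name": "filtered ventilation", "flow_fraction": 0.7, "factors": [0.5, 0.5, 0.001]},
    {"name": "unfiltered exhaust",   "flow_fraction": 0.2, "factors": [0.5, 0.5]},
    {"name": "open doorway",         "flow_fraction": 0.1, "factors": [0.5]},
]

total_lpf = sum(p["flow_fraction"] * path_lpf(p["factors"]) for p in paths)
source_term = 120.0 * total_lpf      # grams of respirable powder at risk (illustrative)
print(f"total LPF = {total_lpf:.4f}, respirable source term = {source_term:.2f} g")
```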

  7. Landslide hazard assessment along a mountain highway in the Indian Himalayan Region (IHR) using remote sensing and computational models

    Science.gov (United States)

    Krishna, Akhouri P.; Kumar, Santosh

    2013-10-01

    Landslide hazard assessments using computational models, such as artificial neural network (ANN) and frequency ratio (FR) models, were carried out covering one of the important mountain highways in the Central Himalaya of the Indian Himalayan Region (IHR). Landslide influencing factors were either calculated or extracted from spatial databases including recent remote sensing data from LANDSAT TM, the CARTOSAT digital elevation model (DEM) and the Tropical Rainfall Measuring Mission (TRMM) satellite for rainfall data. The ANN was implemented using the multi-layered feed-forward architecture with different input, output and hidden layers. This model, based on the back-propagation algorithm, derived weights for all possible parameters of landslides and the causative factors considered. The training sites for landslide-prone and non-prone areas were identified and verified through details gathered from remote sensing and other sources. Frequency ratio (FR) models are based on observed relationships between the distribution of landslides and each landslide-related factor. The FR model implementation proved useful for assessing the spatial relationships between landslide locations and the factors contributing to their occurrence. The above computational models generated respective susceptibility maps of landslide hazard for the study area. This further allowed the simulation of landslide hazard maps on a medium scale using a GIS platform and remote sensing data. Upon validation and accuracy checks, it was observed that both models produced good results, with FR having some edge over ANN-based mapping. Such statistical and functional models led to a better understanding of the relationships between the landslides and preparatory factors as well as ensuring lower levels of subjectivity compared to qualitative approaches.
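
    The frequency ratio part is simple enough to sketch directly: for each class of a conditioning factor, FR is the share of landslide pixels in the class divided by the share of the total area in the class. The pixel counts below are invented.

```python
# Frequency ratio (FR) per class of one conditioning factor (here, slope).
import pandas as pd

data = pd.DataFrame({
    "slope_class": ["0-10", "10-20", "20-30", "30-45", ">45"],
    "landslide_pixels": [5, 40, 120, 160, 25],
    "total_pixels": [50000, 80000, 60000, 30000, 5000],
})

data["pct_landslides"] = data["landslide_pixels"] / data["landslide_pixels"].sum()
data["pct_area"] = data["total_pixels"] / data["total_pixels"].sum()
data["FR"] = data["pct_landslides"] / data["pct_area"]   # > 1 means positive association
print(data[["slope_class", "FR"]].round(2))

# The susceptibility index of a map cell is then the sum of the FR values of the
# classes it falls into, one term per conditioning factor.
```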

  8. A study of the slope of cox proportional hazard and Weibull models ...

    African Journals Online (AJOL)

    However, when the distributional assumptions for the Weibull model are not satisfied, the Cox proportional hazards model will be used, although semi-parametric, because it possesses a similar characteristic of covariate inclusion. The main objective of this research work is to determine if the Cox proportional hazard model depend ...

  9. A new model dependency ratio for European cities

    Directory of Open Access Journals (Sweden)

    Gianna Zamaro

    2008-09-01

    Full Text Available

    Background: Sometimes referred to as ‘the demographic time bomb,’ the European Union, state and local governments are concerned about the impact of an ageing population on both sustainable economic development and the demand for health and social support services. Seeking to mitigate these pressures, the Organisation for Economic Co-operation and Development (OECD) has developed a policy framework Live Longer: Work Longer and the World Health Organization (WHO) has set a policy framework for Active Ageing which maintains that early life course interventions can reduce levels of disability and dependency in older age. The WHO European Healthy Cities Network (WHO-EHCN) promotes healthy urban planning to encourage healthy lifestyles and maintain older people as a resource in the workplace and to their communities. Our objective is to develop a new model dependency ratio (NMDR) for European cities which synthesises these three policy frameworks.

    Methods: Starting from the classic formulation of the dependency ratio (DR), which compares the 'dependent' population segments with the working-age or 'productive' segments, the model is developed in six stages, drawing on data from secondary European and national sources and from primary sources contained in Healthy Ageing Profiles of fifteen WHO-EHCN cities.

    Results: From an orthodox baseline, the second stage of modelling increases the DR by moving economically inactive people of working age from denominator to numerator. Thereafter, refinements introduced in stages three to six, gradually reduce the DR.

    Conclusions: The NMDR challenges the 'demographic time bomb' predicted by orthodox formulations and can be used as a tool by city decision makers.
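
    A minimal sketch of the baseline ratio and the stage-two modification described above, with invented population figures; the refinements of stages three to six are not represented.

```python
# Classic dependency ratio vs. the stage-two variant that moves economically
# inactive people of working age from the denominator to the numerator.
pop_young = 140_000              # under 15
pop_working_age = 640_000        # 15-64
pop_old = 220_000                # 65 and over
inactive_working_age = 180_000   # economically inactive, aged 15-64

classic_dr = (pop_young + pop_old) / pop_working_age
stage2_dr = (pop_young + pop_old + inactive_working_age) / (pop_working_age - inactive_working_age)

print(f"classic DR: {classic_dr:.2f}")
print(f"stage-2 DR: {stage2_dr:.2f}   (stages 3-6 then gradually reduce the ratio)")
```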

  10. Modeling of finite aspect ratio effects on current drive

    International Nuclear Information System (INIS)

    Wright, J.C.; Phillips, C.K.

    1996-01-01

    Most 2D RF modeling codes use a parameterization of current drive efficiencies to calculate fast wave driven currents. This parameterization assumes a uniform diffusion coefficient and requires a priori knowledge of the wave polarizations. These difficulties may be avoided by a direct calculation of the quasilinear diffusion coefficient from the Kennel-Engelmann form with the field polarizations calculated by a full wave code. This eliminates the need to use the approximation inherent in the parameterization. Current profiles are then calculated using the adjoint formulation. This approach has been implemented in the FISIC code. The accuracy of the parameterization of the current drive efficiency, η, is judged by a comparison with a direct calculation in which χ is the adjoint function, ε is the kinetic energy, and Γ is the quasilinear flux. It is shown that for large aspect ratio devices (ε → 0), the parameterization is nearly identical to the direct calculation. As the aspect ratio approaches unity, visible differences between the two calculations appear

  11. Modeling of finite aspect ratio effects on current drive

    Energy Technology Data Exchange (ETDEWEB)

    Wright, J.C.; Phillips, C.K. [Princeton Plasma Physics Lab., NJ (United States)

    1996-12-31

    Most 2D RF modeling codes use a parameterization of current drive efficiencies to calculate fast wave driven currents. This parameterization assumes a uniform diffusion coefficient and requires a priori knowledge of the wave polarizations. These difficulties may be avoided by a direct calculation of the quasilinear diffusion coefficient from the Kennel-Engelmann form with the field polarizations calculated by a full wave code. This eliminates the need to use the approximation inherent in the parameterization. Current profiles are then calculated using the adjoint formulation. This approach has been implemented in the FISIC code. The accuracy of the parameterization of the current drive efficiency, η, is judged by a comparison with a direct calculation in which χ is the adjoint function, ε is the kinetic energy, and Γ is the quasilinear flux. It is shown that for large aspect ratio devices (ε → 0), the parameterization is nearly identical to the direct calculation. As the aspect ratio approaches unity, visible differences between the two calculations appear.

  12. Evaluating the hazard from Siding Spring dust: Models and predictions

    Science.gov (United States)

    Christou, A.

    2014-12-01

    Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU) - about a third of a lunar distance - from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a ``free'' flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip-side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan) by high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time given to characterise this object until Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.

  13. Comprehensive Modeling of the Effects of Hazardous Asteroids Mitigation Techniques

    Data.gov (United States)

    National Aeronautics and Space Administration — A key challenge for the future of humanity is to develop and understand what technological options are viable for deflecting or mitigating hazardous asteroids. While...

  14. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this

  15. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    Zadora, G.; Neocleous, T.

    2009-01-01

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
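
    A heavily simplified, univariate sketch of the likelihood ratio idea (the paper's model is multivariate, with kernel densities at the between-object level): the numerator is a within-object normal density centred on the control object, the denominator a kernel density estimate of the background population, both evaluated at the recovered fragment's value. The data and spreads are simulated.

```python
# Two-level likelihood ratio sketch: LR = p(y_recovered | same source) / p(y_recovered | random source).
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(3)

# Background population of object-level means, e.g. dRI = log10|RI_a - RI_b| per glass object.
population_means = rng.normal(loc=-3.0, scale=0.6, size=200)
within_sd = 0.05                     # assumed within-object measurement spread

control_mean = -2.95                 # mean of measurements on the known (control) object
recovered = -2.90                    # measurement on the recovered fragment

numerator = norm.pdf(recovered, loc=control_mean, scale=within_sd)
denominator = gaussian_kde(population_means)(recovered)[0]
lr = numerator / denominator
print(f"LR = {lr:.1f}  (> 1 supports common origin, < 1 supports different origin)")
```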

  16. Expansion of Collisional Radiative Model for Helium line ratio spectroscopy

    Science.gov (United States)

    Cinquegrani, David; Cooper, Chris; Forest, Cary; Milhone, Jason; Munoz-Borges, Jorge; Schmitz, Oliver; Unterberg, Ezekial

    2015-11-01

    Helium line ratio spectroscopy is a powerful technique of active plasma edge spectroscopy. It enables reconstruction of plasma edge parameters like electron density and temperature by use of suitable Collisional Radiative Models (CRM). An established approach is successful at moderate plasma densities (~10^18 m^-3 range) and temperatures (30-300 eV), taking recombination and charge exchange to be negligible. The goal of this work is to experimentally explore limitations of this approach to CRM. For basic validation the Madison Plasma Dynamo eXperiment (MPDX) will be used. MPDX offers a very uniform plasma and spherical symmetry at low temperature (5-20 eV) and low density (10^16-10^17 m^-3). Initial data from MPDX show a deviation in CRM results when compared to Langmuir probe data. This discrepancy points to the importance of recombination effects. The validated model is applied to first-time measurement of electron density and temperature in front of an ICRH antenna at the TEXTOR tokamak. These measurements are important to understand RF coupling and PMI physics at the antenna limiters. Work supported in part by start-up funds of the Department of Engineering Physics at the UW - Madison, USA and NSF CAREER award PHY-1455210.

  17. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  18. Zero Information in the Two-Sample Mixed Proportional Hazards Model

    NARCIS (Netherlands)

    Klaassen, Chris A.J.; Lenstra, Andries J.

    2000-01-01

    The mixed proportional hazards model generalizes the Cox model by incorporating a random effect. In the case of two samples, it is chiefly determined by a triple consisting of a number representing the treatment effect, the integrated base-line hazard, and the distribution of the unobserved random

  19. Technology Learning Ratios in Global Energy Models; Ratios de Aprendizaje Tecnologico en Modelos Energeticos Globales

    Energy Technology Data Exchange (ETDEWEB)

    Varela, M.

    2001-07-01

    The process of introduction of a new technology supposes that, as its production and utilisation increase, its operation improves and its investment and production costs decrease. The accumulation of experience and learning of a new technology increases in parallel with the increase of its market share. This process is represented by technological learning curves, and the energy sector is not detached from this process of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of the technologies vary with time, but this trend is incorporated in an exogenous way in these energy models, that is to say, it is only a function of time. This practice is applied to the cost indicators of the technology such as the specific investment costs or the efficiency of the energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of the energy systems including the endogenous modelling of the process of technological progress. Finally, it makes a comparison of several of the most used global energy models (MARKAL, MESSAGE and ERIS) and, more specifically, of the use these models make of the concept of technological learning. (Author) 17 refs.
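
    The learning-curve relationship underlying such models can be written down directly: specific cost falls by a fixed fraction for every doubling of cumulative capacity, with the progress ratio (and the learning rate = 1 - progress ratio) as the key parameter. The numbers below are illustrative, not drawn from the paper.

```python
# One-factor learning curve: C = C0 * (Q / Q0) ** (-b), with 2 ** (-b) = progress ratio.
import numpy as np

def specific_cost(cum_capacity, c0=1000.0, cap0=1.0, progress_ratio=0.85):
    """Specific investment cost after accumulating cum_capacity units of experience."""
    b = -np.log2(progress_ratio)          # learning-by-doing exponent
    return c0 * (cum_capacity / cap0) ** (-b)

for q in (1, 2, 4, 8, 16, 32):
    print(f"cumulative capacity x{q:>2}: cost = {specific_cost(q):7.1f} EUR/kW")
# learning rate = 1 - progress ratio = 15 % cost reduction per capacity doubling
```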

  20. Modelling the costs of natural hazards in games

    Science.gov (United States)

    Bostenaru-Dan, M.

    2012-04-01

    City are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another games genre besides video games, namely board games, will be investigated, since there are games on the construction and reconstruction of a cathedral, its tower and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings against earthquakes, as an "upgrade" rather than only an extension as is currently the case in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but judged by the author to be the worst for mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing. Games are also employed in teaching of urban

  1. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...

  2. A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva

    2018-03-01

    The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrence of large earthquakes is then followed by quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence made in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to consider a series of large earthquakes that occur in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard is estimated by a hazard function which comprises three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard associated with the next large-earthquake cluster, and a constant hazard for the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
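
    As an illustrative formulation only (notation assumed here, not quoted from the record above), such a three-part time-dependent hazard can be written as:

        \[
        \lambda(t) = \lambda_{\mathrm{post}}(t) + \lambda_{\mathrm{pre}}(t) + \lambda_{0},
        \]

    where \(\lambda_{\mathrm{post}}(t)\) is a decreasing hazard following the last large-earthquake cluster, \(\lambda_{\mathrm{pre}}(t)\) is an increasing hazard approaching the next cluster, and \(\lambda_{0}\) is a constant hazard for the random occurrence of small-to-moderate earthquakes.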

  3. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    Science.gov (United States)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on the economy, transport, and the natural environment at both local and regional scales. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focused on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, the questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  4. Model Modification in Covariance Structure Modeling: A Comparison among Likelihood Ratio, Lagrange Multiplier, and Wald Tests.

    Science.gov (United States)

    Chou, Chih-Ping; Bentler, P. M.

    1990-01-01

    The empirical performance under null/alternative hypotheses of the likelihood ratio difference test (LRDT); Lagrange Multiplier test (evaluating the impact of model modification with a specific model); and Wald test (using a general model) were compared. The new tests for covariance structure analysis performed as well as did the LRDT. (RLC)

  5. Taxonomic analysis of perceived risk: modeling individual and group perceptions within homogeneous hazard domains

    International Nuclear Information System (INIS)

    Kraus, N.N.; Slovic, P.

    1988-01-01

    Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions

  6. A simple GMM estimator for the semi-parametric mixed proportional hazard model

    NARCIS (Netherlands)

    Bijwaard, G.E.; Ridder, G.; Woutersen, T.

    2013-01-01

    Ridder and Woutersen (Ridder, G., and T. Woutersen. 2003. “The Singularity of the Efficiency Bound of the Mixed Proportional Hazard Model.” Econometrica 71: 1579–1589) have shown that under a weak condition on the baseline hazard, there exist root-N consistent estimators of the parameters in a

  7. Comparison of the restricted mean survival time with the hazard ratio in superiority trials with a time-to-event end point.

    Science.gov (United States)

    Huang, Bo; Kuan, Pei-Fen

    2017-12-28

    With the emergence of novel therapies exhibiting distinct mechanisms of action compared to traditional treatments, departure from the proportional hazard (PH) assumption in clinical trials with a time-to-event end point is increasingly common. In these situations, the hazard ratio may not be a valid statistical measurement of treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of the RMST-based inference against the hazard ratio-based inference, under various scenarios and design parameter setups. The log-rank test is generally a powerful test when there is evident separation favoring one treatment arm at most of the time points across the Kaplan-Meier survival curves, but the performance of the RMST test is similar. Under non-PH scenarios where late separation of survival curves is observed, the RMST-based test has better performance than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) in the experimental arm is expected, selecting the minimum of the maximum observed event time as the truncation timepoint for the RMST is not recommended. In addition, we recommend the inclusion of analysis based on the RMST curve over the truncation time in clinical settings where there is suspicion of substantial departure from the PH assumption. Copyright © 2017 John Wiley & Sons, Ltd.
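
    As a minimal sketch of the RMST idea (not the authors' code; the data and the truncation time are hypothetical), the RMST up to a time tau is the area under the Kaplan-Meier survival curve, which can be computed directly:

        import numpy as np

        def kaplan_meier(times, events):
            # Kaplan-Meier survival estimate evaluated at the distinct event times
            order = np.argsort(times)
            times, events = np.asarray(times)[order], np.asarray(events)[order]
            uniq = np.unique(times[events == 1])
            at_risk = np.array([(times >= t).sum() for t in uniq])
            deaths = np.array([((times == t) & (events == 1)).sum() for t in uniq])
            return uniq, np.cumprod(1.0 - deaths / at_risk)

        def rmst(times, events, tau):
            # restricted mean survival time: area under the step-wise KM curve up to tau
            t, s = kaplan_meier(times, events)
            keep = t <= tau
            grid = np.concatenate(([0.0], t[keep], [tau]))
            step = np.concatenate(([1.0], s[keep]))
            return float(np.sum(np.diff(grid) * step))

        # hypothetical two-arm comparison at tau = 24 months:
        # delta_rmst = rmst(t_treatment, e_treatment, 24.0) - rmst(t_control, e_control, 24.0)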

  8. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no Agency has produced directed guidance on models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  9. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no Agency has produced directed guidance on models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  10. The 2014 update to the National Seismic Hazard Model in California

    Science.gov (United States)

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U. S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  11. Occupational hazard evaluation model underground coal mine based on unascertained measurement theory

    Science.gov (United States)

    Deng, Quanlong; Jiang, Zhongan; Sun, Yaru; Peng, Ya

    2017-05-01

    In order to study how to comprehensively evaluate the influence of several occupational hazards on miners' physical and mental health, an occupational hazard evaluation indicator system based on unascertained measurement theory was established for quantitative and qualitative analysis. Each indicator weight was determined by information entropy, the occupational hazard level was estimated by credible-degree recognition criteria, and the evaluation model was programmed in Visual Basic. Applying the evaluation model to the comprehensive occupational hazard evaluation of six posts in an underground coal mine, the occupational hazard degree was graded, and the evaluation results are consistent with the actual situation. The results show that dust and noise are the most prominent of the coal mine occupational hazard factors. Excavation face support workers are the most affected, followed by heading machine drivers, coal cutter drivers, and coalface move support workers; the occupational hazard degree of these four types of workers is level II (mild). The occupational hazard degree of ventilation workers and safety inspection workers is level I. The evaluation model can evaluate an underground coal mine objectively and accurately, and can be employed in actual engineering.
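
    A brief sketch of the information-entropy weighting step mentioned above, assuming the standard entropy weight method (the score matrix and indicator set are hypothetical):

        import numpy as np

        def entropy_weights(scores):
            # rows = evaluated posts, columns = occupational hazard indicators;
            # scores are assumed non-negative and already normalised to comparable scales
            P = scores / scores.sum(axis=0, keepdims=True)
            plogp = np.where(P > 0, P * np.log(P), 0.0)
            entropy = -plogp.sum(axis=0) / np.log(scores.shape[0])
            diversification = 1.0 - entropy
            return diversification / diversification.sum()

        # e.g. six posts x four indicators (dust, noise, heat, vibration), values hypothetical
        w = entropy_weights(np.array([[0.9, 0.7, 0.3, 0.4],
                                      [0.8, 0.6, 0.2, 0.3],
                                      [0.7, 0.8, 0.4, 0.2],
                                      [0.6, 0.5, 0.5, 0.3],
                                      [0.3, 0.4, 0.6, 0.2],
                                      [0.2, 0.3, 0.5, 0.1]]))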

  12. Using Bayesian Belief Networks to model volcanic hazards interaction: an application for rain-triggered lahars

    Science.gov (United States)

    Tierz, Pablo; Odbert, Henry; Phillips, Jeremy; Woodhouse, Mark; Sandri, Laura; Selva, Jacopo; Marzocchi, Warner

    2016-04-01

    Quantification of volcanic hazards is a challenging task for modern volcanology. Assessing the large uncertainties involved in the hazard analysis requires the combination of volcanological data, physical and statistical models. This is a complex procedure even when taking into account only one type of volcanic hazard. However, volcanic systems are known to be multi-hazard environments where several hazardous phenomena (tephra fallout, Pyroclastic Density Currents -PDCs-, lahars, etc.) may occur either simultaneously or sequentially. Bayesian Belief Networks (BBNs) are a flexible and powerful way of modelling uncertainty. They are statistical models that can merge information coming from data, physical models, other statistical models or expert knowledge into a unified probabilistic assessment. Therefore, they can be applied to model the interaction between different volcanic hazards in an efficient manner. In this work, we design and preliminarily parametrize a BBN with the aim of forecasting the occurrence and volume of rain-triggered lahars when considering: (1) input of pyroclastic material, in the form of tephra fallout and PDCs, over the catchments around the volcano; (2) remobilization of this material by antecedent lahar events. Input of fresh pyroclastic material can be modelled through a combination of physical models (e.g. advection-diffusion models for tephra fallout such as HAZMAP and shallow-layer continuum models for PDCs such as Titan2D) and uncertainty quantification techniques, while the remobilization efficiency can be constrained from datasets of lahar observations at different volcanoes. The applications of this kind of probabilistic multi-hazard approach can range from real-time forecasting of lahar activity to calibration of physical or statistical models (e.g. emulators) for long-term volcanic hazard assessment.

  13. Report 3: Guidance document on practices to model and implement Extreme Weather hazards in extended PSA

    International Nuclear Information System (INIS)

    Alzbutas, R.; Ostapchuk, S.; Borysiewicz, M.; Decker, K.; Kumar, Manorma; Haeggstroem, A.; Nitoi, M.; Groudev, P.; Parey, S.; Potempski, S.; Raimond, E.; Siklossy, T.

    2016-01-01

    The goal of this report is to provide guidance on practices to model extreme weather hazards and implement them in extended Level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the End Users Workshop. This guidance focuses on extreme weather hazards, namely extreme wind, extreme temperature and snow pack. Other hazards, however, are considered in cases where they are correlated with or associated with the hazard under discussion. The guidance developed refers to existing guidance whenever possible. As recommended by end users, this guidance covers questions of developing integrated and/or separate extreme weather PSA models. (authors)

  14. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    Directory of Open Access Journals (Sweden)

    J. Blahut

    2010-11-01

    Full Text Available Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise

  15. Incorporation of all hazard categories into U.S. NRC PRA models

    International Nuclear Information System (INIS)

    Sancaktar, Selim; Ferrante, Fernando; Siu, Nathan; Coyne, Kevin

    2014-01-01

    Over the last two decades, the U.S. Nuclear Regulatory Commission (NRC) has maintained independent probabilistic risk assessment (PRA) models to calculate nuclear power plant (NPP) core damage frequency (CDF) from internal events at power. These models are known as Standardized Plant Analysis Risk (SPAR) models. There are 79 such models representing 104 domestic nuclear plants, with some SPAR models representing more than one unit on a site. These models allow the NRC risk analysts to perform independent quantitative risk estimates of operational events and degraded plant conditions. It is well recognized that using only the internal events contribution to overall plant risk estimates provides a useful, but limited, assessment of the complete plant risk profile. Inclusion of all hazard categories applicable to a plant in the plant PRA model would provide a more comprehensive assessment of plant risk. However, implementation of a more comprehensive treatment of additional hazard categories (e.g., fire, flooding, high winds, seismic) presents a number of challenges, including technical considerations. The U.S. NRC has been incorporating additional hazard categories into its set of nuclear power plant PRA models since 2004. Currently, 18 SPAR models include additional hazard categories such as internal flooding, internal fire, seismic, and wind events. In most cases, these external hazard models were derived from Generic Letter 88-20 Individual Plant Examination of External Events (IPEEE) reports. Recently, NRC started incorporating detailed Fire PRA (FPRA) information based on the current licensing effort that allows licensees to transition into a risk-informed fire protection framework, as well as additional external hazards developed by some licensees, into enhanced SPAR models. These updated external hazards SPAR models are referred to as SPAR All-Hazard (SPAR-AHZ) models (i.e., they incorporate additional risk contributors beyond internal events). This paper

  16. Crown ratio models for tropical rainforests species in Oban division ...

    African Journals Online (AJOL)

    Crown ratio (CR) is a characteristic used to describe the crown size, which is an important element of forest growth and yield. It is often used as an important predictor variable for tree-level growth equations. It indicates tree vigour and is an important habitat variable. It is often estimated using allometry. Modified versions of ...

  17. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
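
    For orientation, the two model classes contrasted above have the standard forms (notation assumed, not quoted from the record):

        \[
        \text{Cox (multiplicative):}\quad \lambda(t \mid X) = \lambda_0(t)\exp(\beta^{\top} X),
        \qquad
        \text{additive (Aalen / Lin--Ying):}\quad \lambda(t \mid X) = \lambda_0(t) + \beta^{\top} X,
        \]

    so that covariate error perturbs the hazard multiplicatively in the first case and additively in the second, which is why its effects and the corresponding corrections differ between the two models.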

  18. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Science.gov (United States)

    Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom

    2017-06-01

    We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  19. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Directory of Open Access Journals (Sweden)

    Mark Stirling

    2017-06-01

    Full Text Available We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration, and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g., short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  20. Modeling of seismic hazards for dynamic reliability analysis

    International Nuclear Information System (INIS)

    Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.

    1993-01-01

    This paper investigates the appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of the SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra are involved in the energy class, and power and response spectra and PGAs are involved in the power class. These relationships are also investigated by using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in the time and frequency domains and has less variance than the other indices. In addition, we have proposed the procedure of DRA based on total energy. (author)

  1. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    Science.gov (United States)

    Paprotny, Dominik; Morales-Nápoles, Oswaldo; Jonkman, Sebastiaan N.

    2017-07-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km2. The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating flood zones of local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents the aggregated results on the flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase of flood zones area by 2-4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (28-38 % for a 100-year return period). This is because in many parts of Europe river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.

  2. A Model Suggestion to Predict Leverage Ratio for Construction Projects

    OpenAIRE

    Özlem Tüz; Şafak Ebesek

    2013-01-01

    Due to its nature, construction is an industry with high uncertainty and risk. The construction industry carries high leverage ratios. Firms with low equity work on big projects through the progress payment system, but in this case even a small negative deviation in the planned cash flows constitutes a major risk for the company. The use of leverage makes it possible to achieve large-scale, high-profit targets with a small investment, but it also brings high risk with it. Investors may lose all or the portion of th...

  3. Snakes as hazards: modelling risk by chasing chimpanzees.

    Science.gov (United States)

    McGrew, William C

    2015-04-01

    Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.

  4. Application of a Cloud Model-Set Pair Analysis in Hazard Assessment for Biomass Gasification Stations.

    Science.gov (United States)

    Yan, Fang; Xu, Kaili

    2017-01-01

    Because a biomass gasification station includes various hazard factors, hazard assessment is necessary and significant. In this article, the cloud model (CM) is employed to improve set pair analysis (SPA), and a novel hazard assessment method for a biomass gasification station is proposed based on cloud model-set pair analysis (CM-SPA). In this method, the cloud weight is proposed as the index weight. In contrast to the index weights of other methods, the cloud weight is expressed by cloud descriptors; hence, the randomness and fuzziness of the cloud weight make it effective in reflecting the linguistic variables of experts. Then, the cloud connection degree (CCD) is proposed to replace the connection degree (CD), and the calculation algorithm of the CCD is worked out. By utilizing the CCD, the hazard assessment results are expressed by normal clouds, and the normal clouds are described by cloud descriptors; meanwhile, the hazard grade is confirmed by analyzing the cloud descriptors. After that, two biomass gasification stations undergo hazard assessment via CM-SPA and AHP-based SPA, respectively. The comparison of assessment results illustrates that CM-SPA is suitable and effective for the hazard assessment of a biomass gasification station and that CM-SPA makes the assessment results more reasonable and scientific.
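
    As an illustration of how normal-cloud descriptors are typically turned into "cloud drops" (a standard forward normal cloud generator; the descriptor values are hypothetical and the record's exact CM-SPA algorithm is not reproduced here):

        import numpy as np

        def normal_cloud(Ex, En, He, n=2000, seed=None):
            # forward normal cloud generator: drops x with certainty degrees mu,
            # given expectation Ex, entropy En and hyper-entropy He
            rng = np.random.default_rng(seed)
            En_i = np.abs(rng.normal(En, He, size=n))   # randomised entropy per drop
            x = rng.normal(Ex, En_i)                    # drop position
            mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_i ** 2))
            return x, mu

        # hypothetical cloud weight for one hazard index of a gasification station
        drops, certainty = normal_cloud(Ex=0.35, En=0.06, He=0.01, seed=1)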

  5. Modelling of wander ratios, travel speeds and productivity of cable ...

    African Journals Online (AJOL)

    Multiple regression was also used to develop prediction models for travel speeds loaded and unloaded. The study met its objectives for driving speeds and productivity, and the developed models will be used in a subsequent network analysis to provide solutions to optimise the softwood sawtimber supply chain. The study ...

  6. Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method

    Science.gov (United States)

    Nugraha, A. L.; Awaluddin, M.; Sasmito, B.

    2018-02-01

    One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity. Frequent natural disasters in Semarang City are tidal floods, floods, landslides, and droughts. Therefore, Semarang City needs spatial information from multi hazard mapping to support disaster mitigation planning in the city. Multi hazards map modelling can be derived from parameters such as slope maps, rainfall, land use, and soil types. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques to provide a good classification in determining disaster threats. The Fuzzy-GIS method builds a multi hazards map of Semarang City that can deliver results with good accuracy and an appropriate spread of threat classes, so as to provide disaster information for disaster mitigation planning of Semarang City. From the multi-hazard modelling using GIS-Fuzzy, the membership type with good accuracy is the Gaussian membership, with an RMSE of 0.404, the smallest among the membership types, and a VAF value of 72.909%, the largest among the membership types.
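
    A sketch of the Gaussian membership function referred to in the result, as it would typically be applied to a normalised hazard parameter before the weighted overlay (all values hypothetical):

        import numpy as np

        def gaussian_membership(x, centre, sigma):
            # degree of membership of value x in a fuzzy class centred at `centre`
            x = np.asarray(x, dtype=float)
            return np.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

        # hypothetical: membership of normalised slope values in a "high hazard" class
        slope = np.array([0.15, 0.40, 0.75, 0.90])
        mu_high = gaussian_membership(slope, centre=1.0, sigma=0.25)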

  7. [Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].

    Science.gov (United States)

    Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang

    2014-02-01

    In order to evaluate the hazard of PM2.5 emitted by various boilers, in this paper segmentation of particulate matter with sizes below 2.5 µm was performed based on formation mechanisms and hazard level to human beings and the environment. Meanwhile, taking into account the mass concentration, number concentration, enrichment factor of Hg, and content of Hg in different coal ashes, a comprehensive model aimed at evaluating the hazard of PM2.5 emitted by coal-fired boilers was established. Finally, using field experimental data from the previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.

  8. Flexible parametric modelling of cause-specific hazards to estimate cumulative incidence functions

    Science.gov (United States)

    2013-01-01

    Background: Competing risks are a common occurrence in survival analysis. They arise when a patient is at risk of more than one mutually exclusive event, such as death from different causes, and the occurrence of one of these may prevent any other event from ever happening. Methods: There are two main approaches to modelling competing risks: the first is to model the cause-specific hazards and transform these to the cumulative incidence function; the second is to model directly on a transformation of the cumulative incidence function. We focus on the first approach in this paper. This paper advocates the use of the flexible parametric survival model in this competing risk framework. Results: An illustrative example on the survival of breast cancer patients has shown that the flexible parametric proportional hazards model has almost perfect agreement with the Cox proportional hazards model. However, the large epidemiological data set used here shows clear evidence of non-proportional hazards. The flexible parametric model is able to adequately account for these through the incorporation of time-dependent effects. Conclusion: A key advantage of using this approach is that smooth estimates of both the cause-specific hazard rates and the cumulative incidence functions can be obtained. It is also relatively easy to incorporate time-dependent effects which are commonly seen in epidemiological studies. PMID:23384310
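
    The transformation from cause-specific hazards to the cumulative incidence function mentioned in the Methods is the standard one, reproduced here for reference:

        \[
        \mathrm{CIF}_k(t) = \int_0^t \lambda_k(u)\, S(u)\,\mathrm{d}u,
        \qquad
        S(u) = \exp\!\Bigl(-\sum_{j}\int_0^u \lambda_j(v)\,\mathrm{d}v\Bigr),
        \]

    where \(\lambda_k\) is the cause-specific hazard of cause \(k\) and \(S\) is the all-cause survival function; smooth flexible parametric estimates of each \(\lambda_k\) therefore yield smooth estimates of each \(\mathrm{CIF}_k\).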

  9. A Thermoacoustic Model for High Aspect Ratio Nanostructures

    Directory of Open Access Journals (Sweden)

    Masoud S. Loeian

    2016-09-01

    Full Text Available In this paper, we have developed a new thermoacoustic model for predicting the resonance frequency and quality factors of one-dimensional (1D) nanoresonators. Considering a nanoresonator as a fixed-free Bernoulli-Euler cantilever, an analytical model has been developed to show the influence of the material and geometrical properties of 1D nanoresonators on their mechanical response without any damping. Diameter and elastic modulus have a direct relationship, and length an inverse relationship, with the strain energy and stress at the clamped end of the nanoresonator. A thermoacoustic multiphysics COMSOL model has been elaborated to simulate the frequency response of vibrating 1D nanoresonators in air. The results are an excellent match with experimental data from independently published literature reports, and the results of this model are consistent with the analytical model. Considering the air and thermal damping in the thermoacoustic model, the quality factor of a nanowire has been estimated, and the results show that zinc oxide (ZnO) and silver-gallium (Ag2Ga) nanoresonators are potential candidates as nanoresonators and nanoactuators, and for scanning probe microscopy applications.

  10. Cox Proportional Hazards Models for Modeling the Time to Onset of Decompression Sickness in Hypobaric Environments

    Science.gov (United States)

    Thompson, Laura A.; Chhikara, Raj S.; Conkin, Johnny

    2003-01-01

    In this paper we fit Cox proportional hazards models to a subset of data from the Hypobaric Decompression Sickness Databank. The data bank contains records on the time to decompression sickness (DCS) and venous gas emboli (VGE) for over 130,000 person-exposures to high altitude in chamber tests. The subset we use contains 1,321 records, with 87% censoring, and has the most recent experimental tests on DCS made available from Johnson Space Center. We build on previous analyses of this data set by considering more expanded models and more detailed model assessments specific to the Cox model. Our model - which is stratified on the quartiles of the final ambient pressure at altitude - includes the final ambient pressure at altitude as a nonlinear continuous predictor, the computed tissue partial pressure of nitrogen at altitude, and whether exercise was done at altitude. We conduct various assessments of our model, many of which are recently developed in the statistical literature, and conclude where the model needs improvement. We consider the addition of frailties to the stratified Cox model, but found that no significant gain was attained above a model that does not include frailties. Finally, we validate some of the models that we fit.
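
    A compact sketch of a stratified Cox fit of this kind, assuming the Python lifelines package; the file and column names are hypothetical, and the model omits the nonlinear and frailty terms discussed in the record:

        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.read_csv("hypobaric_dcs_subset.csv")   # hypothetical extract of the databank
        cols = ["time_to_dcs", "dcs_event", "pressure", "tissue_pN2", "exercise", "pressure_quartile"]

        cph = CoxPHFitter()
        # stratifying on the pressure quartile gives each stratum its own baseline hazard
        cph.fit(df[cols], duration_col="time_to_dcs", event_col="dcs_event",
                strata=["pressure_quartile"])
        cph.print_summary()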

  11. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    Science.gov (United States)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  12. Ground motion models used in the 2014 U.S. National Seismic Hazard Maps

    Science.gov (United States)

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.

    2015-01-01

    The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.

  13. Three multimedia models used at hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.

    1996-02-01

    Multimedia models are used commonly in the initial phases of the remediation process where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the model; these descriptions were provided by the model developers

  14. Tree crown ratio models for tropical rainforests in Oban division

    African Journals Online (AJOL)

    DR ADESOPE

    habitat variable. It is often estimated using allometry. Modified versions of Logistic, Richards, Weibull and Exponential functions were used to predict CR for ... Analyses of variance (ANOVA) were carried out to investigate significant differences in tree growth variables under different canopy layers. The mathematical model for the design is: ...

  15. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified...
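
    A deliberately simplified numerical sketch of the idea (hypothetical numbers; numpy's standard Dirichlet is used here in place of the scaled Dirichlet distribution of the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        n_buildings = 1200                               # aggregated exposure for one municipality
        indicator_share = np.array([0.55, 0.30, 0.15])   # auxiliary indicator (e.g. land cover) per hazard cell
        concentration = 50.0                             # higher -> disaggregation follows the indicator more closely

        # sample uncertain disaggregation proportions, then allocate the exposures
        proportions = rng.dirichlet(concentration * indicator_share, size=5000)
        allocations = np.array([rng.multinomial(n_buildings, p) for p in proportions])
        print(allocations.mean(axis=0), allocations.std(axis=0))   # mean and spread per hazard cell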

  16. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data.

  17. Modeling and Testing Landslide Hazard Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Mutasem Sh. Alkhasawneh

    2014-01-01

    Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance, and are usually represented by an easy-to-interpret tree-like structure. Four models were created using Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137,570 samples was selected for each variable in the analysis, where 68,786 samples represent landslides and 68,786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using the Exhaustive CHAID (82.0%) model, compared to the CHAID (81.9%), CRT (75.6%), and QUEST (74.0%) models. Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
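
    The CRT part of the workflow can be sketched with scikit-learn's CART implementation (CHAID and QUEST are not available in scikit-learn); the file and column names are hypothetical:

        import pandas as pd
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        df = pd.read_csv("penang_landslide_samples.csv")          # hypothetical 137,570-row table
        X, y = df.drop(columns="landslide"), df["landslide"]      # 21 conditioning factors, binary label

        tree = DecisionTreeClassifier(max_depth=8, random_state=0)
        scores = cross_val_score(tree, X, y, cv=10)               # 10-fold cross-validation
        print("mean accuracy:", scores.mean())

        tree.fit(X, y)
        ranking = pd.Series(tree.feature_importances_, index=X.columns).sort_values(ascending=False)
        print(ranking.head())                                     # factor importance ranking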

  18. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  19. Development and Analysis of a Hurricane Hazard Model for Disaster Risk Assessment in Central America

    Science.gov (United States)

    Pita, G. L.; Gunasekera, R.; Ishizawa, O. A.

    2014-12-01

    Hurricane and tropical storm activity in Central America has consistently caused over the past decades thousands of casualties, significant population displacement, and substantial property and infrastructure losses. As a component to estimate future potential losses, we present a new regional probabilistic hurricane hazard model for Central America. Currently, there are very few openly available hurricane hazard models for Central America. This resultant hazard model would be used in conjunction with exposure and vulnerability components as part of a World Bank project to create country disaster risk profiles that will assist to improve risk estimation and provide decision makers with better tools to quantify disaster risk. This paper describes the hazard model methodology which involves the development of a wind field model that simulates the gust speeds at terrain height at a fine resolution. The HURDAT dataset has been used in this study to create synthetic events that assess average hurricane landfall angles and their variability at each location. The hazard model also then estimates the average track angle at multiple geographical locations in order to provide a realistic range of possible hurricane paths that will be used for risk analyses in all the Central-American countries. This probabilistic hurricane hazard model is then also useful for relating synthetic wind estimates to loss and damage data to develop and calibrate existing empirical building vulnerability curves. To assess the accuracy and applicability, modeled results are evaluated against historical events, their tracks and wind fields. Deeper analyses of results are also presented with a special reference to Guatemala. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the

  20. Measures to assess the prognostic ability of the stratified Cox proportional hazards model

    DEFF Research Database (Denmark)

    (Tybjaerg-Hansen, A.) The Fibrinogen Studies Collaboration. The Copenhagen City Heart Study; Tybjærg-Hansen, Anne

    2009-01-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...

  1. A study of the slope of Cox proportional hazard and Weibull models

    African Journals Online (AJOL)

    Adejumo & Ahmadu

    Keywords: Cox Proportional Hazard Model, Weibull Model, Slope, Shape parameters, Scale parameter, Survival time. INTRODUCTION. Survival analysis studies the amount of time that it takes before a particular event, such as death, occurrence of a disease, marriage, or divorce, occurs. However, the same techniques can ...

  2. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter...

  3. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
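
    A minimal Monte Carlo sketch in the spirit of such a power study, assuming the lifelines implementation of the residual-based test of proportional hazards; all design values (sample size, Weibull shapes, censoring) are hypothetical:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.statistics import proportional_hazard_test

        rng = np.random.default_rng(1)
        n, n_sim, alpha, rejections = 400, 200, 0.05, 0

        for _ in range(n_sim):
            x = rng.integers(0, 2, n)                      # binary covariate
            shape = np.where(x == 1, 0.8, 1.5)             # group-specific Weibull shape -> non-proportional hazards
            t = 10.0 * rng.weibull(shape)
            c = rng.uniform(5.0, 15.0, n)                  # administrative censoring
            df = pd.DataFrame({"time": np.minimum(t, c), "event": (t <= c).astype(int), "x": x})
            cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
            test = proportional_hazard_test(cph, df, time_transform="rank")
            if np.min(test.p_value) < alpha:
                rejections += 1

        print("estimated power:", rejections / n_sim)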

  4. Measurements and Models for Hazardous chemical and Mixed Wastes

    Energy Technology Data Exchange (ETDEWEB)

    Laurel A. Watts; Cynthia D. Holcomb; Stephanie L. Outcalt; Beverly Louie; Michael E. Mullins; Tony N. Rogers

    2002-08-21

    Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the DOE sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.
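
    To make the equation-of-state starting point concrete, the sketch below evaluates the standard Peng-Robinson parameters for a pure component and solves the cubic for the compressibility factor. It is a generic textbook calculation, not the authors' salt-extended model; the critical constants used (for water) are standard literature values.

```python
import numpy as np

R = 8.314462618  # J/(mol K)

def peng_robinson_Z(T, P, Tc, Pc, omega):
    """Compressibility factors from the Peng-Robinson equation of state (pure fluid)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Cubic in Z: Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.isreal(roots)].real)  # smallest = liquid-like, largest = vapour-like

# Water near its normal boiling point (Tc = 647.1 K, Pc = 22.064 MPa, omega = 0.344).
print(peng_robinson_Z(T=373.15, P=101325.0, Tc=647.1, Pc=22.064e6, omega=0.344))
```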

  5. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Vol. 50, No. 6 (2014), pp. 849-868. ISSN 0023-5954. Institutional support: RVO:67985556. Keywords: Aalen model * Bayesian estimation * MCMC. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.541, year: 2014. http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  6. The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms

    Science.gov (United States)

    Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie

    2009-01-01

    The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with

  7. Measurement and Model for Hazardous Chemical and Mixed Waste

    Energy Technology Data Exchange (ETDEWEB)

    Michael E. Mullins; Tony N. Rogers; Stephanie L. Outcalt; Beverly Louie; Laurel A. Watts; Cynthia D. Holcomb

    2002-07-30

    Mixed solvent aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the United States. Not only does the chemical process industry create large quantities of aqueous waste, but the majority of the waste inventory at the Department of Energy (DOE) sites previously used for nuclear weapons production is mixed solvent aqueous waste. In addition, large quantities of waste are expected to be generated in the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical properties is essential. The goal of this work is to develop a phase equilibrium model for mixed solvent aqueous solutions containing salts. An equation of state was sought for these mixtures that (a) would require a minimum of adjustable parameters and (b) could be obtained from available data or data that were easily measured. A model was developed to predict vapor composition and pressure given the liquid composition and temperature. It is based on the Peng-Robinson equation of state, adapted to include non-volatile and salt components. The model itself is capable of predicting the vapor-liquid equilibria of a wide variety of systems composed of water, organic solvents, salts, nonvolatile solutes, and acids or bases. The representative system of water + acetone + 2-propanol + NaNO3 was selected to test and verify the model. Vapor-liquid equilibrium and phase density measurements were performed for this system and its constituent binaries.

  8. Medical Modeling of Particle Size Effects for CB Inhalation Hazards

    Science.gov (United States)

    2015-09-01

    warfare) may create adverse health effects when inhaled. Once the materials enter the respiratory tract, they may deposit on the airway surfaces...mppd.htm). New features in this version include a deposition model specifically for nanoparticles, nonuniform lung ventilation to include the effect ... mechanisms cause local lesions, but the more virulent strains may then spread throughout the body via blood or lymph (Celli 2008). The effects of

  9. Large area application of a corn hazard model. [Soviet Union

    Science.gov (United States)

    Ashburn, P.; Taylor, T. W. (Principal Investigator)

    1981-01-01

    An application test of the crop calendar portion of a corn (maize) stress indicator model developed by the early warning, crop condition assessment component of AgRISTARS was performed over the corn for grain producing regions of the U.S.S.R. during the 1980 crop year using real data. Performance of the crop calendar submodel was favorable; efficiency gains in meteorological data analysis time were on the order of 85 to 90 percent.

  10. Evaluation and hydrological modelization in the natural hazard prevention

    International Nuclear Information System (INIS)

    Pla Sentis, Ildefonso

    2011-01-01

    Soil degradation negatively affects its functions as a base for producing food, regulating the hydrological cycle and maintaining environmental quality. All over the world soil degradation is increasing, partly due to gaps or deficiencies in the evaluation of the processes and causes of this degradation in each specific situation. The processes of soil physical degradation are manifested through several problems such as compaction, runoff, water and wind erosion, and landslides, with collateral effects in situ and at a distance, often with disastrous consequences such as floods, landslides, sedimentation, droughts, etc. These processes are frequently associated with unfavourable changes in the hydrological processes responsible for the water balance and soil water regimes, mainly derived from land use changes, different management practices and climatic changes. The evaluation of these processes using simple simulation models, under several scenarios of climatic change, soil properties and land use and management, would make it possible to predict the occurrence of these disastrous processes and consequently to select and apply the appropriate soil conservation practices to eliminate or reduce their effects. Such simulation models require, as a basis, detailed climatic information and hydrological soil property data. Despite the existence of methodologies and commercial equipment (increasingly sophisticated and precise) to measure the different physical and hydrological soil properties related to degradation processes, most of them are only applicable under very specific or laboratory conditions. Indirect methodologies are often used, based on relations or empirical indexes without adequate validation, which can lead to expensive mistakes in the evaluation of soil degradation processes and their effects on natural disasters. Simple field methodologies would be preferable: direct, adaptable to different soil types and climates, and to the sample size and the spatial variability of the

  11. Modelling human interactions in the assessment of man-made hazards

    International Nuclear Information System (INIS)

    Nitoi, M.; Farcasiu, M.; Apostol, M.

    2016-01-01

    The human reliability assessment tools currently available are not capable of adequately modelling the human ability to adapt, innovate and manage under extreme situations. The paper presents the results obtained by the ICN PSA team in the frame of the FP7 Advanced Safety Assessment Methodologies: extended PSA (ASAMPSA_E) project regarding the investigation of conducting HRA for man-made hazards. The paper proposes a four-step methodology for the assessment of human interactions in external events (definition and modelling of human interactions; quantification of human failure events; recovery analysis; review). The most relevant factors with respect to HRA for man-made hazards (response execution complexity; existence of procedures with respect to the scenario in question; time available for action; timing of cues; accessibility of equipment; harsh environmental conditions) are presented and discussed thoroughly. The challenges identified in relation to HRA for man-made hazards are highlighted. (authors)

  12. An example of debris-flows hazard modeling using GIS

    Directory of Open Access Journals (Sweden)

    L. Melelli

    2004-01-01

    Full Text Available We present a GIS-based model for predicting debris-flow occurrence. The availability of two different digital datasets and the use of a Digital Elevation Model (at a given scale) have greatly enhanced our ability to quantify and analyse the topography in relation to debris flows. In particular, analysing the relationship between debris flows and the various causative factors provides new understanding of the mechanisms. We studied the contact zone between the calcareous basement and the fluvial-lacustrine infill in the adjacent northern area of the Terni basin (Umbria, Italy), and identified eleven basins and corresponding alluvial fans. We suggest that accumulations of colluvium in topographic hollows, whatever the sources might be, should be considered potential debris-flow source areas. In order to develop a susceptibility map for the entire area, an index was calculated from the number of initiation locations in each causative factor unit divided by the areal extent of that unit within the study area. This index identifies those units that produce the most debris flows in each Representative Elementary Area (REA). Finally, the results are presented together with the advantages and the disadvantages of the approach, and the need for further research.
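
    The density-style index described above (initiation count per factor unit divided by the unit's area) can be computed with a few lines of table manipulation. The sketch below uses hypothetical factor classes and areas; the column names and values are illustrative only.

```python
import pandas as pd

# Hypothetical inventory: causative-factor class of each mapped initiation point.
points = pd.DataFrame({"factor_class": ["hollow", "hollow", "planar", "hollow", "ridge"]})

# Hypothetical areal extent (km^2) of each causative-factor class in the study area.
areas = pd.Series({"hollow": 4.0, "planar": 12.0, "ridge": 9.0}, name="area_km2")

counts = points["factor_class"].value_counts().reindex(areas.index, fill_value=0)
index = (counts / areas).rename("initiations_per_km2")

# Normalise so the values sum to one, giving a relative susceptibility weight per class.
susceptibility = (index / index.sum()).rename("relative_susceptibility")
print(pd.concat([counts.rename("n_points"), areas, index, susceptibility], axis=1))
```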

  13. A distribution ratio model of strontium by crown ether extraction from simulated HLLW

    International Nuclear Information System (INIS)

    Chen Jing; Wang Qiuping; Wang Jianchen; Song Chongli

    1995-01-01

    An empirical distribution ratio model for strontium extraction by dicyclohexano-18-crown-6-n-octanol from simulated high-level waste is established. The experimental points for the model are designed by the experimental homogeneous-design method. The regression of the distribution ratio model of strontium is carried out by the complex-optimization method. The model is verified against experimental distribution ratio data under different extraction conditions. The results show that the relative deviations between the calculated and the experimental data are within +-10% and the mean relative deviation is 4.4%. The empirical model, together with an iteration program, can be used for strontium extraction process calculations

  14. Guidance document on practices to model and implement Earthquake hazards in extended PSA (final version). Volume 1

    International Nuclear Information System (INIS)

    Decker, K.; Hirata, K.; Groudev, P.

    2016-01-01

    The current report provides guidance for the assessment of seismo-tectonic hazards in level 1 and 2 PSA. The objective is to review existing guidance, identify methodological challenges, and propose novel guidance on key issues. Guidance for the assessment of vibratory ground motion and fault capability comprises the following: - listings of data required for the hazard assessment and methods to estimate data quality and completeness; - in-depth discussion of key input parameters required for hazard models; - discussions on commonly applied hazard assessment methodologies; - references to recent advances in science and technology. Guidance on the assessment of correlated or coincident hazards comprises chapters on: - screening of correlated hazards; - assessment of correlated hazards (natural and man-made); - assessment of coincident hazards. (authors)

  15. Fitting additive hazards models for case-cohort studies: a multiple imputation approach.

    Science.gov (United States)

    Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook

    2016-07-30

    In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.
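
    A rough sketch of the imputation-then-fit idea is shown below, using scikit-learn's IterativeImputer for the partially observed exposure and lifelines' AalenAdditiveFitter for the additive hazards fit, with imputation-specific estimates averaged at the end. This is not the authors' rejection-sampling imputation model; the packages, simulated data, column names and the pooling shortcut are my assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from lifelines import AalenAdditiveFitter

rng = np.random.default_rng(0)
n = 300

# Simulated cohort: z is fully observed; x (the main exposure) is observed only for
# the subcohort and the cases, mimicking a case-cohort design (illustrative values).
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
time = rng.exponential(1.0 / (0.5 + 0.3 * (x > 0) + 0.2 * (z > 0)))
event = (rng.uniform(size=n) < 0.7).astype(int)
in_subcohort = rng.uniform(size=n) < 0.3
x_obs = np.where(in_subcohort | (event == 1), x, np.nan)

df = pd.DataFrame({"T": time, "E": event, "x": x_obs, "z": z})

m = 5  # number of imputations
fits = []
for k in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=k)
    filled = df.copy()
    filled[["x", "z"]] = imp.fit_transform(df[["x", "z"]])
    aaf = AalenAdditiveFitter(fit_intercept=True)
    aaf.fit(filled, duration_col="T", event_col="E")
    # Cumulative regression coefficient for x at the end of follow-up.
    fits.append(aaf.cumulative_hazards_["x"].iloc[-1])

print("pooled cumulative coefficient for x:", np.mean(fits))
```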

  16. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  17. Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site

    Science.gov (United States)

    Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.

    2012-04-01

    The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will be different depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and the quality of data are generally very heterogeneous at a regional scale, it is necessary to take into account the uncertainty in the analysis. In this perspective, a new hazard modeling method is developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking into account data uncertainty. This method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert approach is still necessary to finalize the maps. Indeed, it is the only way to take into account some influential factors in slope stability, such as the heterogeneity of the geological formations or the effects of anthropic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios, and especially their precipitation component, into the ALICE program with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, a land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnical properties and on different hydrological contexts varying in time. This communication, realized within the framework of the Safeland project, is supported by the European Commission under the 7th Framework Programme for Research and Technological

  18. Flood hazard mapping of Palembang City by using 2D model

    Science.gov (United States)

    Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri

    2017-11-01

    Palembang, as the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that floods almost every year. Flooding in the city is highly related to the Musi River Basin. Based on the Indonesia National Agency of Disaster Management (BNPB), the level of flood hazard is high. Many natural factors cause flooding in the city, such as high-intensity rainfall, inadequate drainage capacity, and backwater flow due to spring tides. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make the flood problem worse. The objective of this study is to develop a flood hazard map of Palembang City by using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool and is verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard refers to Head Regulation of BNPB number 2 of 2012 regarding general guidelines for disaster risk assessment. The result for the 25-year return period flood shows that, with 112.47 km2 of inundated area, 14 sub-catchments are categorized as high hazard level. It is expected that the hazard map can be used for risk assessment.

  19. Model Persamaan Massa Karbon Akar Pohon dan Root-Shoot Ratio Massa Karbon (Equation Models of Tree Root Carbon Mass and Root-Shoot Carbon Mass Ratio)

    Directory of Open Access Journals (Sweden)

    Elias .

    2011-03-01

    Full Text Available The case study was conducted in an Acacia mangium plantation at BKPH Parung Panjang, KPH Bogor. The objective of the study was to formulate equation models of tree root carbon mass and of the root-to-shoot carbon mass ratio of the plantation. It was found that the carbon content in the parts of the tree biomass (stems, branches, twigs, leaves, and roots) differed, with the highest and the lowest carbon content in the main stem of the tree and in the leaves, respectively. The main stem and leaves of the tree accounted for 70% of tree biomass. The root-shoot ratio of root biomass to above-ground tree biomass and the root-shoot ratio of root biomass to main stem biomass were 0.1443 and 0.25771, respectively, and 75% of tree carbon mass was in the main stem and roots of the tree. It was also found that the root-shoot ratio of root carbon mass to above-ground tree carbon mass and the root-shoot ratio of root carbon mass to main stem carbon mass were 0.1442 and 0.2034, respectively. All allometric equation models of tree root carbon mass of A. mangium have a high goodness-of-fit, as indicated by their high adjusted R2. Keywords: Acacia mangium, allometric, root-shoot ratio, biomass, carbon mass

  20. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.

    Science.gov (United States)

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-11-27

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or the elements of the hazard of interest. Because GIS is well suited to the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  1. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    Science.gov (United States)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of the early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.

  2. ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.

    2016-01-01

    Vol. 3, 12 December (2016), article No. 29. E-ISSN 2197-8670. Institutional support: RVO:67985891. Keywords: landslides * run-out models * medium scale hazard analysis * quantitative risk assessment. Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  3. Mass movement hazard assessment model in the slope profile

    Science.gov (United States)

    Colangelo, A. C.

    2003-04-01

    The central aim of this work is to assess the spatial behaviour of critical depths for slope stability, and the behaviour of their correlated variables at the soil-regolith transition, along slope profiles over granite, migmatite and mica-schist parent materials in a humid tropical environment. To this end, we made measurements of the shear strength of residual soils and regolith materials with the "Cohron Sheargraph" apparatus and evaluated the shear stress behaviour at the soil-regolith boundary along slope profiles in each referred lithology. In the limit equilibrium approach applied here, we adapt the infinite slope model to the analysis of the whole slope profile by means of a finite element solution, as in the Fellenius or Bishop methods. In our case, we assume that the potential rupture surface occurs at the soil-regolith or soil-rock boundary in the slope material. For each slice, the factor of safety was calculated considering the shear strength (cohesion and friction) of the material, the soil-regolith boundary depth, the soil moisture content, the slope gradient, the gradient of the top of the subsurface flow, and the apparent soil bulk density. The correlations showed the relative weight of the cohesion, internal friction angle, apparent bulk density and slope gradient variables with respect to the evaluation of critical depth behaviour for different simulated soil moisture content levels at the slope profile scale. An important result concerns the central role of the soil bulk-density variable along the slope profile, during soil evolution and at the present day, because of the intense clay production, mainly kaolinite and gibbsite in the B and C horizons, in the humid tropical environment. An increase in soil clay content produces a fall in the friction angle and bulk density of the material, especially when some montmorillonite or illite clay is present. We have also observed, at threshold conditions, that a slight change in the soil bulk-density value may drastically disturb the equilibrium of
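
    For reference, the classical infinite slope factor of safety that this kind of slice-by-slice analysis builds on can be written in a few lines. The sketch below is the standard textbook formulation with pore pressure expressed through a water-table ratio; the parameter values are illustrative and not taken from the study.

```python
import numpy as np

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Factor of safety of an infinite slope.

    c        effective cohesion (kPa)
    phi_deg  effective friction angle (degrees)
    gamma    unit weight of soil (kN/m^3)
    z        depth of the potential failure surface (m)
    beta_deg slope angle (degrees)
    m        fraction of z that is saturated above the failure surface (0..1)
    gamma_w  unit weight of water (kN/m^3)
    """
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    normal_stress = gamma * z * np.cos(beta) ** 2           # total normal stress (kPa)
    pore_pressure = gamma_w * m * z * np.cos(beta) ** 2     # pore water pressure (kPa)
    shear_stress = gamma * z * np.sin(beta) * np.cos(beta)  # driving shear stress (kPa)
    return (c + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress

# Example: 1.5 m deep soil on a 30 degree slope, half-saturated profile.
print(round(infinite_slope_fs(c=5.0, phi_deg=32.0, gamma=18.0, z=1.5,
                              beta_deg=30.0, m=0.5), 2))
```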

  4. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    Science.gov (United States)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 years and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent, globally scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why using innovative techniques customised for broad-scale use is preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some

  5. Occupational Hazards of Flying Pigs: A Swine Model of Hypobaric-Induced Neuronal Injury

    Science.gov (United States)

    2017-04-22

    ... necessitates intubation or anesthetization of exposure subjects, miniature pigs (Sus scrofa domestica) were repetitively exposed to non-hypoxic

  6. Flood Hazard Mapping Combining Hydrodynamic Modeling and Multi Annual Remote Sensing data

    Directory of Open Access Journals (Sweden)

    Laura Giustarini

    2015-10-01

    Full Text Available This paper explores a method to combine the time and space continuity of a large-scale inundation model with discontinuous satellite microwave observations for high-resolution flood hazard mapping. The assumption behind this approach is that hydraulic variables computed from continuous, spatially-distributed hydrodynamic modeling and observed as discrete satellite-derived flood extents are correlated in time, so that probabilities can be transferred from the model series to the observations. A prerequisite is, therefore, the existence of a significant correlation between a modeled variable (i.e., flood extent or volume) and the synchronously-observed flood extent. If this is the case, the availability of model simulations over a long time period allows for a robust estimate of non-exceedance probabilities that can be attributed to the corresponding synchronously-available satellite observations. The generated flood hazard map has a spatial resolution equal to that of the satellite images, which is higher than that of currently available large-scale inundation models. The method was applied to the Severn River (UK), using the outputs of a global inundation model provided by the European Centre for Medium-range Weather Forecasts and a large collection of ENVISAT ASAR imagery. A comparison between the hazard map obtained with the proposed method and with a more traditional numerical modeling approach supports the hypothesis that combining model results and satellite observations can provide advantages for high-resolution flood hazard mapping, provided that a sufficient number of remote sensing images is available and that a time correlation is present between variables derived from the global model and obtained from satellite observations.
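
    The probability transfer described above amounts to looking up where each satellite-synchronous model value sits in the long model time series. The sketch below computes empirical non-exceedance probabilities that way; the daily series, overpass dates and values are synthetic placeholders, not data from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical multi-year daily series of modelled flood extent (km^2).
dates = pd.date_range("1995-01-01", "2014-12-31", freq="D")
modelled_extent = pd.Series(np.abs(rng.gamma(2.0, 15.0, size=len(dates))), index=dates)

# Dates of (hypothetical) satellite acquisitions over the reach.
overpasses = pd.to_datetime(["2007-07-23", "2012-11-28", "2014-02-09"])

# Empirical non-exceedance probability of the modelled value on each overpass date,
# estimated from its rank within the full model series (Weibull plotting position).
sorted_vals = np.sort(modelled_extent.values)
n = len(sorted_vals)
for d in overpasses:
    v = modelled_extent.loc[d]
    p = np.searchsorted(sorted_vals, v, side="right") / (n + 1)
    print(f"{d.date()}  modelled extent {v:7.1f} km2  non-exceedance p = {p:.3f}")
```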

  7. Report 2: Guidance document on practices to model and implement external flooding hazards in extended PSA

    International Nuclear Information System (INIS)

    Rebour, V.; Georgescu, G.; Leteinturier, D.; Raimond, E.; La Rovere, S.; Bernadara, P.; Vasseur, D.; Brinkman, H.; Groudev, P.; Ivanov, I.; Turschmann, M.; Sperbeck, S.; Potempski, S.; Hirata, K.; Kumar, Manorma

    2016-01-01

    This report provides a review of existing practices to model and implement external flooding hazards in existing level 1 PSA. The objective is to identify good practices on the modelling of initiating events (internal and external hazards) with a perspective of development of extended PSA and implementation of external events modelling in extended L1 PSA, its limitations/difficulties as far as possible. The views presented in this report are based on the ASAMPSA-E partners' experience and available publications. The report includes discussions on the following issues: - how to structure a L1 PSA for external flooding events, - information needed from geosciences in terms of hazards modelling and to build relevant modelling for PSA, - how to define and model the impact of each flooding event on SSCs with distinction between the flooding protective structures and devices and the effect of protection failures on other SSCs, - how to identify and model the common cause failures in one reactor or between several reactors, - how to apply HRA methodology for external flooding events, - how to credit additional emergency response (post-Fukushima measures like mobile equipment), - how to address the specific issues of L2 PSA, - how to perform and present risk quantification. (authors)

  8. Tornado hazard model with the variation effects of tornado intensity along the path length

    International Nuclear Information System (INIS)

    Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo

    2015-01-01

    Most Japanese tornadoes have been reported near the coastline, where all of Japan's nuclear power plants are located. Japanese electric power companies are required to assess tornado risks at their plants under a new regulation issued in 2013. The new regulatory guide exemplifies a tornado hazard model which cannot consider the variation of tornado intensity along the path length and consequently produces conservative risk estimates. The guide also recommends the long narrow strip area along the coastline, with a width of 5-10 km, as a region of interest, although the model tends to estimate inadequate wind speeds there due to its limits of application. The purpose of this study is to propose a new tornado hazard model which can be applied to the long narrow strip area. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)

  9. New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.

    Science.gov (United States)

    Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C

    2016-01-15

    Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. Copyright © 2015 Ihekwaba et al.

  10. A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Omid Boyer

    2013-01-01

    Full Text Available Technological progress has caused industrial hazardous waste to increase worldwide. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can result from the location of undesirable facilities and also from routing hazardous waste. In this paper a bi-objective mixed integer programming model for location-routing of industrial hazardous waste is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk. The risk of population exposure within a bandwidth along the route is used to measure transportation risk. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem reveal a conflict between the two objectives. Hence, it is possible to decrease the cost value by marginally increasing the transportation risk value and vice versa. A weighted sum method is utilized to combine the two objective functions into a single objective function. To solve the problem, GAMS software with the CPLEX solver is used. The model is applied to Markazi province in Iran.
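
    A toy version of the weighted-sum scalarization can be set up with any MILP library; the sketch below uses PuLP on a tiny facility-selection and shipping example. The data, the weight, and the use of PuLP with the bundled CBC solver instead of GAMS/CPLEX are all placeholders for illustration, not the paper's model.

```python
import pulp

sources = ["plant1", "plant2"]
sites = ["siteA", "siteB"]

cost = {("plant1", "siteA"): 4, ("plant1", "siteB"): 7,
        ("plant2", "siteA"): 6, ("plant2", "siteB"): 3}     # transport cost per unit
risk = {("plant1", "siteA"): 9, ("plant1", "siteB"): 2,
        ("plant2", "siteA"): 5, ("plant2", "siteB"): 8}     # exposed-population proxy
open_cost = {"siteA": 10, "siteB": 12}
supply = {"plant1": 5, "plant2": 8}
w = 0.6  # weight on cost; (1 - w) goes to risk

prob = pulp.LpProblem("hazardous_waste_location_routing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("ship", list(cost), lowBound=0)   # units shipped per route
y = pulp.LpVariable.dicts("open", sites, cat="Binary")      # site opened?

total_cost = pulp.lpSum(cost[k] * x[k] for k in cost) + pulp.lpSum(open_cost[s] * y[s] for s in sites)
total_risk = pulp.lpSum(risk[k] * x[k] for k in risk)
prob += w * total_cost + (1 - w) * total_risk               # weighted-sum objective

for p in sources:                                           # ship all waste out
    prob += pulp.lpSum(x[(p, s)] for s in sites) == supply[p]
for s in sites:                                             # only ship to opened sites
    prob += pulp.lpSum(x[(p, s)] for p in sources) <= sum(supply.values()) * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("status:", pulp.LpStatus[prob.status])
for k in cost:
    print(k, x[k].value())
print("opened:", {s: y[s].value() for s in sites})
```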

  11. Geoinformational prognostic model of mudflows hazard and mudflows risk for the territory of Ukrainian Carpathians

    Science.gov (United States)

    Chepurna, Tetiana B.; Kuzmenko, Eduard D.; Chepurnyj, Igor V.

    2017-06-01

    The article is devoted to the geological issue of space-time regional prognosis of mudflow hazard. A methodology for space-time prediction of mudflow hazard by creating a GIS predictive model has been developed. Using GIS technologies, a relevant and representative set of spatial and temporal factors with significant influence, suitable for use in the regional prediction of mudflow hazard, was selected. Geological, geomorphological, technological, climatic, and landscape factors were selected as spatial mudflow factors. The spatial analysis is based on the detection of a regular connection between spatial factor characteristics and the spatial distribution of mudflow sites. The function of a standard complex spatial index (SCSI) of the probability of mudflow site distribution has been calculated. The temporal, long-term prediction of mudflow activity was based on the hypothesis of the regular recurrence of natural processes. Heliophysical, seismic, meteorological, and hydrogeological factors were selected as temporal mudflow factors. The function of a complex index of long-standing mudflow activity (CIMA) has been calculated. A prognostic geoinformational model of mudflow hazard up to the year 2020, the year of the next peak of mudflow activity, has been created. Mudflow risks have been calculated and a cartogram of mudflow risk assessment within the limits of administrative-territorial units has been built for the year 2020.

  12. The Pedestrian Evacuation Analyst: geographic information systems software for modeling hazard evacuation potential

    Science.gov (United States)

    Jones, Jeanne M.; Ng, Peter; Wood, Nathan J.

    2014-01-01

    Recent disasters such as the 2011 Tohoku, Japan, earthquake and tsunami; the 2013 Colorado floods; and the 2014 Oso, Washington, mudslide have raised awareness of catastrophic, sudden-onset hazards that arrive within minutes of the events that trigger them, such as local earthquakes or landslides. Due to the limited amount of time between generation and arrival of sudden-onset hazards, evacuations are typically self-initiated, on foot, and across the landscape (Wood and Schmidtlein, 2012). Although evacuation to naturally occurring high ground may be feasible in some vulnerable communities, evacuation modeling has demonstrated that other communities may require vertical-evacuation structures within a hazard zone, such as berms or buildings, if at-risk individuals are to survive some types of sudden-onset hazards (Wood and Schmidtlein, 2013). Researchers use both static least-cost-distance (LCD) and dynamic agent-based models to assess the pedestrian evacuation potential of vulnerable communities. Although both types of models help to understand the evacuation landscape, LCD models provide a more general overview that is independent of population distributions, which may be difficult to quantify given the dynamic spatial and temporal nature of populations (Wood and Schmidtlein, 2012). Recent LCD efforts related to local tsunami threats have focused on an anisotropic (directionally dependent) path distance modeling approach that incorporates travel directionality, multiple travel speed assumptions, and cost surfaces that reflect variations in slope and land cover (Wood and Schmidtlein, 2012, 2013). The Pedestrian Evacuation Analyst software implements this anisotropic path-distance approach for pedestrian evacuation from sudden-onset hazards, with a particular focus at this time on local tsunami threats. The model estimates evacuation potential based on elevation, direction of movement, land cover, and travel speed and creates a map showing travel times to safety (a

  13. Data Model for Multi Hazard Risk Assessment Spatial Support Decision System

    Science.gov (United States)

    Andrejchenko, Vera; Bakker, Wim; van Westen, Cees

    2014-05-01

    The goal of the CHANGES Spatial Decision Support System is to support end-users in making decisions related to risk reduction measures for areas at risk from multiple hydro-meteorological hazards. The crucial parts in the design of the system are the user requirements, the data model, the data storage and management, and the relationships between the objects in the system. The implementation of the data model is carried out entirely with an open source database management system with a spatial extension. The web application is implemented using open source geospatial technologies with PostGIS as the database, Python for scripting, and Geoserver and javascript libraries for visualization and the client-side user-interface. The model can handle information from different study areas (currently, study areas from France, Romania, Italy and Poland are considered). Furthermore, the data model handles information about administrative units, projects accessible by different types of users, user-defined hazard types (floods, snow avalanches, debris flows, etc.), hazard intensity maps of different return periods, spatial probability maps, elements at risk maps (buildings, land parcels, linear features etc.), and economic and population vulnerability information dependent on the hazard type and the type of the element at risk, in the form of vulnerability curves. The system has an inbuilt database of vulnerability curves, but users can also add their own. Included in the model is the management of a combination of different scenarios (e.g. related to climate change, land use change or population change) and alternatives (possible risk-reduction measures), as well as data-structures for saving the calculated economic or population loss or exposure per element at risk, aggregation of the loss and exposure using the administrative unit maps, and finally, producing the risk maps. The risk data can be used for cost-benefit analysis (CBA) and multi-criteria evaluation (SMCE). The

  14. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    Science.gov (United States)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
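
    For readers unfamiliar with the clustering model mentioned above, the sketch below evaluates a standard ETAS conditional intensity for a small synthetic catalogue. The parameter values are arbitrary illustrations, not calibrated induced-seismicity values.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.2, K=0.05,
                   alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """ETAS conditional intensity lambda(t) (events/day) given past events.

    lambda(t) = mu + sum over t_i < t of K * exp(alpha*(M_i - m0)) / (t - t_i + c)**p
    """
    past = event_times < t
    trig = K * np.exp(alpha * (event_mags[past] - m0)) / (t - event_times[past] + c) ** p
    return mu + trig.sum()

# Small synthetic catalogue: occurrence times (days) and magnitudes.
times = np.array([1.0, 3.5, 3.6, 10.2])
mags = np.array([4.1, 3.2, 3.0, 4.8])

for t in (2.0, 4.0, 11.0, 30.0):
    print(f"t = {t:5.1f} d   lambda = {etas_intensity(t, times, mags):.3f} events/day")
```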

  15. The tensor-to-scalar ratio in a G-inflation model

    Directory of Open Access Journals (Sweden)

    LI Ping

    2014-08-01

    Full Text Available We calculate the tensor-to-scalar ratio in a G-inflation model in this paper. In our model, we can avoid the situation in which the contribution of the higher orders is so large that it screens the lower-order Lagrangian, or vice versa. Every order of the Galileon Lagrangian contributes to the result. Choosing a proper parameter, the tensor-to-scalar ratio of our model is slightly smaller than the ratio of single-field slow-roll inflation. However, this result fits much better with BICEP.

  16. Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models

    Science.gov (United States)

    Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges

    2016-04-01

    The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes are therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, mapping of seismogenically active faults is still an on-going program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015) in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and those more classically derived from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model
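
    To illustrate the strain-rate-to-recurrence step in the simplest possible terms, the sketch below converts a geodetic strain rate into a scalar moment rate with a Kostrov-style relation and expresses it as an equivalent annual number of earthquakes of a single magnitude. The rigidity, seismogenic thickness, cell size, strain rate and reference magnitude are placeholder values, and real models distribute the moment across a full magnitude-frequency relation rather than a single magnitude.

```python
MU = 3.0e10          # crustal rigidity (Pa), a common assumption
H = 30.0e3           # seismogenic thickness (m), assumed
AREA = (50.0e3)**2   # area of one model cell (m^2), assumed 50 km x 50 km
STRAIN_RATE = 1.0e-8 # representative principal strain rate (1/yr), illustrative

# Kostrov-style conversion: scalar moment rate accommodated by the cell (N*m per year).
moment_rate = 2.0 * MU * H * AREA * STRAIN_RATE

# Hanks-Kanamori: seismic moment (N*m) of a reference-magnitude event.
def moment_from_mw(mw):
    return 10.0 ** (1.5 * mw + 9.05)

mw_ref = 6.0
events_per_year = moment_rate / moment_from_mw(mw_ref)
print(f"moment rate: {moment_rate:.2e} N*m/yr")
print(f"equivalent rate of Mw {mw_ref} events: {events_per_year:.3f} per year "
      f"(return period ~ {1.0 / events_per_year:.0f} yr)")
```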

  17. Cox proportional hazards models have more statistical power than logistic regression models in cross-sectional genetic association studies

    NARCIS (Netherlands)

    van der Net, Jeroen B.; Janssens, A. Cecile J. W.; Eijkemans, Marinus J. C.; Kastelein, John J. P.; Sijbrands, Eric J. G.; Steyerberg, Ewout W.

    2008-01-01

    Cross-sectional genetic association studies can be analyzed using Cox proportional hazards models with age as time scale, if age at onset of disease is known for the cases and age at data collection is known for the controls. We assessed to what degree and under what conditions Cox proportional

  18. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...

  19. Methodologies, models and parameters for environmental, impact assessment of hazardous and radioactive contaminants

    International Nuclear Information System (INIS)

    Aguero, A.; Cancio, D.; Garcia-Olivares, A.; Romero, L.; Pinedo, P.; Robles, B.; Rodriguez, J.; Simon, I.; Suanez, A.

    2003-01-01

    An Environmental Impact Assessment Methodology to assess the impact arising from contaminants present in hazardous and radioactive wastes has been developed. Taking into account the background information on legislation, waste categories and contaminant inventories, and disposal, recycling and waste treatment options, an Environmental Impact Assessment Methodology (MEIA) is proposed. This is applicable to (i) several types of solid wastes (hazardous, radioactive and mixed wastes); (ii) several management options (recycling, and temporary and final storage (in shallow and deep disposal)); and (iii) several levels of data availability. Conceptual and mathematical models and software tools needed for the application of the MEIA have been developed. Bearing in mind that this is a complex process, both the models and the tools have to be developed following an iterative approach, involving refinement of the models so as to better correspond to the described system. The selection of suitable parameters for the models is based on information derived from field and laboratory measurements and experiments, and then on applying a data elicitation protocol. An application is shown for a hypothetical shallow radioactive waste disposal facility (test case), with all the steps of the MEIA applied sequentially. In addition, the methodology is applied to actual cases of waste management for hazardous wastes from the coal fuel cycle, demonstrating several possibilities for application of the MEIA from a practical perspective. The experience obtained in the development of the work shows that the use of the MEIA for the assessment of management options for hazardous and radioactive wastes gives important advantages, simplifying the execution of the assessment, its traceability and the dissemination of assessment results to other interested parties. (Author)

  20. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    Science.gov (United States)

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  1. Do French macroseismic intensity observations agree with expectations from the European Seismic Hazard Model 2013?

    Science.gov (United States)

    Rey, Julien; Beauval, Céline; Douglas, John

    2018-02-01

    Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundreds of years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
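
    The comparison described above boils down to checking observed exceedance counts against the counts implied by a hazard curve over the completeness window. A minimal sketch of that check, assuming Poisson-distributed exceedances and an illustrative annual rate read from a hazard curve, is shown below; the rate, window length and observed count are placeholders, not values from the study.

```python
from scipy.stats import poisson

# Illustrative inputs for one city (placeholders):
annual_rate = 0.02      # modelled annual rate of exceeding the acceleration bin
window_years = 300      # length of the period considered complete for that intensity
observed_count = 12     # number of observed exceedances in the window

expected = annual_rate * window_years
# How unusual is the observed count if the model rate is correct?
p_at_least = poisson.sf(observed_count - 1, expected)   # P(N >= observed)
p_at_most = poisson.cdf(observed_count, expected)       # P(N <= observed)

print(f"expected exceedances: {expected:.1f}, observed: {observed_count}")
print(f"P(N >= observed | model) = {p_at_least:.3f}, P(N <= observed | model) = {p_at_most:.3f}")
```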

  2. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers my two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques (BERSSIN) of the Institut de Protection, and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of seismic risk, and particularly of seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk installations), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion to which the structure has to resist (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and being part of IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expertise practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical facilities), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion for designing structures or verifying their stability. (author)

  3. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
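
    The fitting idea can be illustrated with a compact editorial sketch (not the implementation of Kappel et al.): an EM loop fits a mixture of through-origin regression lines to signal pairs, and each component's slope is taken as an isotope-ratio estimate. The number of components, the noise level and the synthetic data below are illustrative assumptions, and the group-dropping rule described above is omitted for brevity.

    import numpy as np

    def fit_ratio_mixture(x, y, n_components=2, n_iter=200, sigma=0.002):
        """EM for a mixture of through-origin regression lines y = ratio * x."""
        ratios = np.quantile(y / x, np.linspace(0.2, 0.8, n_components))  # crude starting slopes
        weights = np.full(n_components, 1.0 / n_components)
        for _ in range(n_iter):
            # E-step: responsibility of each line for each point (Gaussian residuals).
            resid = y[:, None] - x[:, None] * ratios[None, :]
            log_r = np.log(weights) - 0.5 * (resid / sigma) ** 2
            r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weighted least-squares slope and mixing weight for each component.
            ratios = (r * x[:, None] * y[:, None]).sum(axis=0) / (r * x[:, None] ** 2).sum(axis=0)
            weights = r.mean(axis=0)
        return ratios, weights

    # Synthetic example: two sub-populations of particles with different (made-up) ratios.
    rng = np.random.default_rng(1)
    x = rng.uniform(0.5, 2.0, 300)
    true_ratio = np.where(rng.random(300) < 0.6, 0.0072, 0.034)   # illustrative values only
    y = true_ratio * x + rng.normal(0.0, 0.002, 300)
    print(fit_ratio_mixture(x, y))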

  4. ZNJPrice/Earnings Ratio Model through Dividend Yield and Required Yield Above Expected Inflation

    Directory of Open Access Journals (Sweden)

    Emil Mihalina

    2010-07-01

    Full Text Available Price/earnings ratio is the most popular and most widespread evaluation model used to assess relative capital asset value on financial markets. In functional terms, company earnings in the very long term can be described with high significance. Empirically, it is visible from long-term statistics that the demanded (required) yield on capital markets has a certain regularity. Thus, investors first require a yield above the stable inflation rate and then a dividend yield and a capital increase caused by the growth of earnings that influence the price, under the assumption that the P/E ratio is stable. By combining the Gordon model for current dividend value, the model of market capitalization of earnings (price/earnings ratio) and bearing in mind the influence of the general price level on company earnings, it is possible to adjust the price/earnings ratio by deriving a function of the required yield on capital markets, measured by a market index, through dividend yield and the inflation rate above the stable inflation rate increased by profit growth. The S&P 500 index, for example, has in the last 100 years grown by exactly the inflation rate above the stable inflation rate increased by profit growth. The comparison of two series of price/earnings ratios, a modelled one and an average 7-year ratio, shows a notable correlation in the movement of the two series, with a three-year deviation. Therefore, it could be hypothesized that three years of the expected inflation level, dividend yield and profit growth rate of the market index are discounted in the current market prices. The conclusion is that, at the present time, the relationship between the adjusted average price/earnings ratio and its effect on the market index on one hand, and the modelled price/earnings ratio on the other, can clearly show the expected dynamics and course in the following period.
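
    The mechanics of the adjustment can be shown with a toy calculation (an editorial sketch with made-up inputs, not the author's calibration): under the Gordon growth model P = D1/(r - g), the justified P/E equals the payout ratio divided by (required yield - growth), with the required yield built up from expected inflation, dividend yield and profit growth.

    def justified_pe(payout_ratio, required_yield, growth):
        """Gordon-model P/E: P/E = payout / (r - g), valid for r > g."""
        if required_yield <= growth:
            raise ValueError("required yield must exceed growth")
        return payout_ratio / (required_yield - growth)

    # Illustrative inputs only (not calibrated to the S&P 500).
    expected_inflation = 0.025      # stable inflation rate
    dividend_yield     = 0.020      # required yield above inflation
    profit_growth      = 0.030      # long-run earnings growth
    required_yield = expected_inflation + dividend_yield + profit_growth

    print(justified_pe(payout_ratio=0.5, required_yield=required_yield, growth=profit_growth))
    # -> 0.5 / (0.075 - 0.030), i.e. roughly 11.1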

  5. Satellite-driven modeling approach for monitoring lava flow hazards during the 2017 Etna eruption

    Science.gov (United States)

    Del Negro, C.; Bilotta, G.; Cappello, A.; Ganci, G.; Herault, A.; Zago, V.

    2017-12-01

    The integration of satellite data and modeling represents an efficient strategy that may provide immediate answers to the main issues raised at the onset of a new effusive eruption. Satellite-based thermal remote sensing of hotspots related to effusive activity can effectively provide a variety of products suited to timing, locating, and tracking the radiant character of lava flows. Hotspots show the location and occurrence of eruptive events (vents). Discharge rate estimates may indicate the current intensity (effusion rate) and potential magnitude (volume). High-spatial resolution multispectral satellite data can complement field observations for monitoring the front position (length) and extension of flows (area). Physics-based models driven, or validated, by satellite-derived parameters are now capable of fast and accurate forecasts of lava flow inundation scenarios (hazard). Here, we demonstrate the potential of the integrated application of satellite remote-sensing techniques and lava flow models during the 2017 effusive eruption at Mount Etna in Italy. This combined approach provided insights into lava flow field evolution by supplying detailed views of flow field construction (e.g., the opening of ephemeral vents) that were useful for more accurate and reliable forecasts of eruptive activity. Moreover, we gave a detailed chronology of the lava flow activity based on field observations and satellite images, assessed the potential extent of impacted areas, mapped the evolution of the lava flow field, and executed hazard projections. The downside of this combination is the high sensitivity of lava flow inundation scenarios to uncertainties in vent location, discharge rate, and other parameters, which can make interpreting hazard forecasts difficult during an effusive crisis. However, such integration at last makes timely forecasts of lava flow hazards during effusive crises possible at the great majority of volcanoes for which no monitoring exists.

  6. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km2) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large scale of investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulative daily intensity of 280–330 mm. This value can be considered the hydrological triggering
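
    The slope-stability core of such an approach is the infinite-slope factor of safety, sketched below with illustrative soil parameters (not those calibrated for the Vezza basin); pore pressure enters through the saturated fraction m of the soil column supplied by the hydrological model.

    import math

    def infinite_slope_fs(c_eff, phi_eff_deg, slope_deg, soil_depth, m,
                          gamma_soil=19.0, gamma_water=9.81):
        """Factor of safety of an infinite slope (kPa, kN/m^3, m units).

        c_eff        effective cohesion [kPa]
        phi_eff_deg  effective friction angle [deg]
        slope_deg    slope angle [deg]
        soil_depth   vertical soil thickness z [m]
        m            saturated fraction of the soil column (0..1), from the hydrological model
        """
        beta = math.radians(slope_deg)
        phi = math.radians(phi_eff_deg)
        normal_stress = gamma_soil * soil_depth * math.cos(beta) ** 2
        pore_pressure = gamma_water * m * soil_depth * math.cos(beta) ** 2
        shear_stress = gamma_soil * soil_depth * math.sin(beta) * math.cos(beta)
        return (c_eff + (normal_stress - pore_pressure) * math.tan(phi)) / shear_stress

    # Example: the same cell under dry and nearly saturated conditions (illustrative values).
    for m in (0.0, 0.9):
        print(m, round(infinite_slope_fs(c_eff=5.0, phi_eff_deg=32.0,
                                         slope_deg=35.0, soil_depth=1.5, m=m), 2))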

  7. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

    Science.gov (United States)

    Clark, Renee M; Besterfield-Sacre, Mary E

    2009-03-01

    We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.

  8. Polytomous IRT models and monotone likelihood ratio of the total score

    NARCIS (Netherlands)

    Hemker, BT; Sijtsma, Klaas; Molenaar, Ivo W; Junker, BW

    1996-01-01

    In a broad class of item response theory (IRT) models for dichotomous items the unweighted total score has monotone likelihood ratio (MLR) in the latent trait theta. In this study, it is shown that for polytomous items MLR holds for the partial credit model and a trivial generalization of this

  9. High-resolution marine flood modelling coupling overflow and overtopping processes: framing the hazard based on historical and statistical approaches

    Science.gov (United States)

    Nicolae Lerma, Alexandre; Bulteau, Thomas; Elineau, Sylvain; Paris, François; Durand, Paul; Anselme, Brice; Pedreros, Rodrigo

    2018-01-01

    A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution with explicit buildings, urban structures such as sea front walls and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on hard and soft available data analysis and conversion of qualitative to quantitative information to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour) are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1) maximum spatial extent of flooded areas, (2) volumes of water propagation inland and (3) water level in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential hazard evolution. Our simulations show that for a maximising 100-year hazard scenario, for the municipality as a whole, 38 % of the affected zones are prone to overflow flooding and 62 % to flooding by propagation of overtopping water volume along the seafront. Results also reveal that for the two kinds of statistic scenarios a difference of about 5 % in the forcing conditions (water level, wave height and period) can produce significant differences in terms of flooding like +13.5 % of water volumes propagating inland or +11.3 % of affected surfaces. In some areas, flood response appears to be very sensitive to the chosen scenario with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness or the uncertainty over the results. Considering a 100-year scenario with mean sea level rise (0.6 m), hazard

  10. High-resolution marine flood modelling coupling overflow and overtopping processes: framing the hazard based on historical and statistical approaches

    Directory of Open Access Journals (Sweden)

    A. Nicolae Lerma

    2018-01-01

    Full Text Available A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution with explicit buildings, urban structures such as sea front walls and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on hard and soft available data analysis and conversion of qualitative to quantitative information to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1 maximum spatial extent of flooded areas, (2 volumes of water propagation inland and (3 water level in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential hazard evolution. Our simulations show that for a maximising 100-year hazard scenario, for the municipality as a whole, 38 % of the affected zones are prone to overflow flooding and 62 % to flooding by propagation of overtopping water volume along the seafront. Results also reveal that for the two kinds of statistic scenarios a difference of about 5 % in the forcing conditions (water level, wave height and period can produce significant differences in terms of flooding like +13.5 % of water volumes propagating inland or +11.3 % of affected surfaces. In some areas, flood response appears to be very sensitive to the chosen scenario with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness or the uncertainty over the results. Considering a 100-year scenario with mean

  11. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i. e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and

  12. A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.

    Science.gov (United States)

    Hardy, Lorna; Hogarth, Lee

    2017-12-01

    This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Inventory) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as at baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline but not by positive mood (p = .54, ηp2 = .008). The negative mood-induced increase in alcohol choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Conditional Akaike information under generalized linear and proportional hazards mixed models

    Science.gov (United States)

    Donohue, M. C.; Overholser, R.; Xu, R.; Vaida, F.

    2011-01-01

    We study model selection for clustered data, when the focus is on cluster specific inference. Such data are often modelled using random effects, and conditional Akaike information was proposed in Vaida & Blanchard (2005) and used to derive an information criterion under linear mixed models. Here we extend the approach to generalized linear and proportional hazards mixed models. Outside the normal linear mixed models, exact calculations are not available and we resort to asymptotic approximations. In the presence of nuisance parameters, a profile conditional Akaike information is proposed. Bootstrap methods are considered for their potential advantage in finite samples. Simulations show that the performance of the bootstrap and the analytic criteria are comparable, with bootstrap demonstrating some advantages for larger cluster sizes. The proposed criteria are applied to two cancer datasets to select models when the cluster-specific inference is of interest. PMID:22822261

  14. A Temperature-dependent Model of Ratio of Specific Heats Applying in Diesel Engine

    OpenAIRE

    Li, shangming

    2017-01-01

    Rate of heat release is a standard tool when engineers tune and develop new engines. The ratio of specific heats γ is considered an essential parameter for achieving an accurate rate of heat release calculation, as it couples the engine system energy and other thermodynamic properties. The γ model is a function of various factors, such as temperature, air-fuel ratio, pressure, etc. To improve the accuracy of the ROHR calculation in the Scania diesel engine in the NTNU machinery laboratory s...
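
    As an editorial illustration of what a temperature-dependent γ model looks like inside a single-zone rate-of-heat-release calculation (placeholder coefficients and values, not the model fitted in this work), a common simple form is a linear decrease of γ with charge temperature:

    def gamma_of_T(T, gamma_ref=1.38, slope=8.0e-5, T_ref=300.0):
        """Linear temperature dependence of the ratio of specific heats (placeholder coefficients)."""
        return gamma_ref - slope * (T - T_ref)

    def rohr_step(p, dp, V, dV, T):
        """Single-zone apparent rate of heat release for one crank-angle step:
        dQ = gamma/(gamma-1) * p*dV + 1/(gamma-1) * V*dp
        """
        g = gamma_of_T(T)
        return g / (g - 1.0) * p * dV + 1.0 / (g - 1.0) * V * dp

    # Example step (made-up in-cylinder values): pressure in Pa, volume in m^3, temperature in K.
    print(rohr_step(p=5.0e6, dp=2.0e5, V=3.0e-4, dV=-1.0e-6, T=900.0))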

  15. Seismic hazard assessment in central Ionian Islands area (Greece) based on stress release models

    Science.gov (United States)

    Votsi, Irene; Tsaklidis, George; Papadimitriou, Eleftheria

    2011-08-01

    The long-term probabilistic seismic hazard of central Ionian Islands (Greece) is studied through the application of stress release models. In order to identify statistically distinct regions, the study area is divided into two subareas, namely Kefalonia and Lefkada, on the basis of seismotectonic properties. Previous results evidenced the existence of stress transfer and interaction between the Kefalonia and Lefkada fault segments. For the consideration of stress transfer and interaction, the linked stress release model is applied. A new model is proposed, where the hazard rate function in terms of X(t) has the form of the Weibull distribution. The fitted models are evaluated through residual analysis and the best of them is selected through the Akaike information criterion. Based on AIC, the results demonstrate that the simple stress release model fits the Ionian data better than the non-homogeneous Poisson and the Weibull models. Finally, the thinning simulation method is applied in order to produce simulated data and proceed to forecasting.
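
    The thinning step mentioned above can be sketched generically (an editorial example with a made-up stress-release-like intensity, not the fitted Ionian model): candidate event times are drawn from a homogeneous Poisson process whose rate bounds the conditional intensity from above, and each candidate is accepted with probability λ(t)/λmax (Ogata-style thinning).

    import numpy as np

    def simulate_by_thinning(cond_intensity, t_end, lambda_max, seed=0):
        """Simulate event times on [0, t_end] for a point process whose conditional
        intensity cond_intensity(t, history) is bounded above by lambda_max."""
        rng = np.random.default_rng(seed)
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lambda_max)        # candidate from the dominating process
            if t > t_end:
                return np.array(events)
            if rng.random() < cond_intensity(t, events) / lambda_max:
                events.append(t)                           # accept with prob lambda(t)/lambda_max

    # Toy stress-release-like intensity: stress grows linearly and drops by a fixed amount per event.
    def intensity(t, events, loading=0.05, drop=1.0, a=-1.0, b=0.8):
        stress = loading * t - drop * len(events)
        return min(np.exp(a + b * stress), 50.0)           # capped so lambda_max is a valid bound

    print(simulate_by_thinning(intensity, t_end=200.0, lambda_max=50.0)[:10])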

  16. Recent developments in health risks modeling techniques applied to hazardous waste site assessment and remediation

    International Nuclear Information System (INIS)

    Mendez, W.M. Jr.

    1990-01-01

    Remediation of hazardous and mixed waste sites is often driven by assessments of the human health risks posed by exposures to hazardous substances released from these sites. The methods used to assess potential health risk involve, either implicitly or explicitly, models for pollutant releases, transport, human exposure and intake, and for characterizing health effects. Because knowledge about pollutant fate and transport processes at most waste sites is quite limited, and data costs are quite high, most of the models currently used to assess risk, and endorsed by regulatory agencies, are quite simple. The models employ many simplifying assumptions about pollutant fate and distribution in the environment, about human pollutant intake, and about toxicologic responses to pollutant exposures. An important consequence of data scarcity and model simplification is that risk estimates are quite uncertain, and estimating the magnitude of the uncertainty associated with a risk assessment has been very difficult. A number of methods have been developed to address the issue of uncertainty in risk assessments in a manner that realistically reflects uncertainty in model specification and data limitations. These methods include definition of multiple exposure scenarios, sensitivity analyses, and explicit probabilistic modeling of uncertainty. Recent developments in this area will be discussed, along with their possible impacts on remediation programs, and remaining obstacles to their wider use and acceptance by the scientific and regulatory communities

  17. QSAR modeling of cumulative environmental end-points for the prioritization of hazardous chemicals.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Sangion, Alessandro

    2018-01-24

    The hazard of chemicals in the environment is inherently related to the molecular structure and derives simultaneously from various chemical properties/activities/reactivities. Models based on Quantitative Structure Activity Relationships (QSARs) are useful to screen, rank and prioritize chemicals that may have an adverse impact on humans and the environment. This paper reviews a selection of QSAR models (based on theoretical molecular descriptors) developed for cumulative multivariate endpoints, which were derived by mathematical combination of multiple effects and properties. The cumulative end-points provide an integrated holistic point of view to address environmentally relevant properties of chemicals.

  18. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    International Nuclear Information System (INIS)

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclides transport problems. Some approaches to incorporate physical-and-chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs

  19. Obtaining adjusted prevalence ratios from logistic regression models in cross-sectional studies.

    Science.gov (United States)

    Bastos, Leonardo Soares; Oliveira, Raquel de Vasconcellos Carvalhaes de; Velasque, Luciane de Souza

    2015-03-01

    In the last decades, the use of the epidemiological prevalence ratio (PR) instead of the odds ratio has been debated as a measure of association in cross-sectional studies. This article addresses the main difficulties in the use of statistical models for the calculation of PR: convergence problems, availability of tools and inappropriate assumptions. We implement the direct approach to estimate the PR from binary regression models based on two methods proposed by Wilcosky & Chambless and compare with different methods. We used three examples and compared the crude and adjusted estimate of PR, with the estimates obtained by use of log-binomial, Poisson regression and the prevalence odds ratio (POR). PRs obtained from the direct approach resulted in values close enough to those obtained by log-binomial and Poisson, while the POR overestimated the PR. The model implemented here showed the following advantages: no numerical instability; assumes adequate probability distribution and, is available through the R statistical package.
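
    As a hedged illustration of the general idea (not the specific Wilcosky & Chambless estimator implemented by the authors), an adjusted prevalence ratio can be obtained in Python with a modified Poisson regression, i.e. a log-link Poisson GLM with a robust sandwich variance, whose exponentiated coefficients are PRs rather than odds ratios; the data below are simulated for illustration only.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated cross-sectional data: a common binary outcome, one exposure, one confounder.
    rng = np.random.default_rng(0)
    n = 2000
    age = rng.normal(40, 10, n)
    exposed = rng.binomial(1, 0.4, n)
    p = np.clip(0.15 * np.exp(0.5 * exposed + 0.01 * (age - 40)), 0, 1)
    outcome = rng.binomial(1, p)
    df = pd.DataFrame({"outcome": outcome, "exposed": exposed, "age": age})

    # Modified Poisson regression: log link plus robust variance gives prevalence ratios.
    fit = smf.glm("outcome ~ exposed + age", data=df,
                  family=sm.families.Poisson()).fit(cov_type="HC0")
    pr = np.exp(fit.params["exposed"])
    ci = np.exp(fit.conf_int().loc["exposed"])
    print(f"Adjusted PR for exposure: {pr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")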

  20. Theoretical quasar emission-line ratios. VII - Energy-balance models for finite hydrogen slabs

    Science.gov (United States)

    Hubbard, E. N.; Puetter, R. C.

    1985-01-01

    The present energy balance calculations for finite, isobaric, hydrogen-slab quasar emission line clouds incorporate probabilistic radiative transfer (RT) in all lines and bound-free continua of a five-level continuum model hydrogen atom. Attention is given to the line ratios, line formation regions, level populations and model applicability results obtained. H lines and a variety of other considerations suggest the possibility of emission line cloud densities in excess of 10^10 cm^-3. Lyman-beta/Lyman-alpha line ratios that are in agreement with observed values are obtained by the models. The observed Lyman/Balmer ratios can be achieved with clouds whose column depths are about 10^22 cm^-2.

  1. Planar seismic source characterization models developed for probabilistic seismic hazard assessment of Istanbul

    Directory of Open Access Journals (Sweden)

    Z. Gülerce

    2017-12-01

    Full Text Available This contribution provides an updated planar seismic source characterization (SSC model to be used in the probabilistic seismic hazard assessment (PSHA for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.

  2. Use of agent-based modelling in emergency management under a range of flood hazards

    Directory of Open Access Journals (Sweden)

    Tagg Andrew

    2016-01-01

    Full Text Available The Life Safety Model (LSM was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.

  3. Planar seismic source characterization models developed for probabilistic seismic hazard assessment of Istanbul

    Science.gov (United States)

    Gülerce, Zeynep; Buğra Soyman, Kadir; Güner, Barış; Kaymakci, Nuretdin

    2017-12-01

    This contribution provides an updated planar seismic source characterization (SSC) model to be used in the probabilistic seismic hazard assessment (PSHA) for Istanbul. It defines planar rupture systems for the four main segments of the North Anatolian fault zone (NAFZ) that are critical for the PSHA of Istanbul: segments covering the rupture zones of the 1999 Kocaeli and Düzce earthquakes, central Marmara, and Ganos/Saros segments. In each rupture system, the source geometry is defined in terms of fault length, fault width, fault plane attitude, and segmentation points. Activity rates and the magnitude recurrence models for each rupture system are established by considering geological and geodetic constraints and are tested based on the observed seismicity that is associated with the rupture system. Uncertainty in the SSC model parameters (e.g., b value, maximum magnitude, slip rate, weights of the rupture scenarios) is considered, whereas the uncertainty in the fault geometry is not included in the logic tree. To acknowledge the effect of earthquakes that are not associated with the defined rupture systems on the hazard, a background zone is introduced and the seismicity rates in the background zone are calculated using smoothed-seismicity approach. The state-of-the-art SSC model presented here is the first fully documented and ready-to-use fault-based SSC model developed for the PSHA of Istanbul.

  4. A Simple Model for Probabilistic Seismic Hazard Analysis of Induced Seismicity Associated With Deep Geothermal Systems

    Science.gov (United States)

    Schlittenhardt, Joerg; Spies, Thomas; Kopera, Juergen; Morales Aviles, Wilhelm

    2014-05-01

    In the research project MAGS (Microseismic activity of geothermal systems) funded by the German Federal Ministry of Environment (BMU) a simple model was developed to determine seismic hazard as the probability of the exceedance of ground motion of a certain size. Such estimates of the annual frequency of exceedance of prescriptive limits of e.g. seismic intensities or ground motions are needed for the planning and licensing, but likewise for the development and operation of deep geothermal systems. For the development of the proposed model well established probabilistic seismic hazard analysis (PSHA) methods for the estimation of the hazard for the case of natural seismicity were adapted to the case of induced seismicity. Important differences between induced and natural seismicity had to be considered. These include significantly smaller magnitudes, depths and source to site distances of the seismic events and, hence, different ground motion prediction equations (GMPE) that had to be incorporated to account for the seismic amplitude attenuation with distance as well as differences in the stationarity of the underlying tectonic and induced processes. Appropriate GMPE's in terms of PGV (peak ground velocity) were tested and selected from the literature. The proposed model and its application to the case of induced seismicity observed during the circulation period (operation phase of the plant) at geothermal sites in Germany will be presented. Using GMPE's for PGV has the advantage to estimate hazard in terms of velocities of ground motion, which can be linked to engineering regulations (e.g. German DIN 4150) which give prescriptive standards for the effects of vibrations on buildings and people. It is thus possible to specify the probability of exceedance of such prescriptive standard values and to decide whether they can be accepted or not. On the other hand hazard curves for induced and natural seismicity can be compared to study the impact at a site. Preliminary
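
    The kind of calculation such a model performs can be sketched as a generic PSHA sum (an editorial example with made-up event rates and a placeholder lognormal GMPE in PGV, not the MAGS model itself): the annual rate of exceeding a PGV level is the sum over induced-event scenarios of their occurrence rates times the probability that each scenario exceeds that level.

    import numpy as np
    from scipy.stats import norm

    def pgv_median_cm_s(magnitude, hypo_dist_km):
        """Placeholder GMPE: ln(PGV) = c0 + c1*M - c2*ln(R); coefficients are illustrative."""
        return np.exp(-4.0 + 1.8 * magnitude - 1.4 * np.log(hypo_dist_km))

    def annual_exceedance_rate(pgv_level, scenarios, sigma_ln=0.7):
        """Sum over (annual_rate, magnitude, distance_km) scenarios of rate * P(PGV > level)."""
        total = 0.0
        for rate, mag, dist in scenarios:
            median = pgv_median_cm_s(mag, dist)
            p_exceed = 1.0 - norm.cdf(np.log(pgv_level), loc=np.log(median), scale=sigma_ln)
            total += rate * p_exceed
        return total

    # Hypothetical induced-seismicity scenarios: (events/year, magnitude, hypocentral distance in km).
    scenarios = [(20.0, 1.5, 5.0), (5.0, 2.0, 5.0), (0.5, 2.5, 5.0)]
    for level in (0.1, 0.3, 1.0, 3.0):     # PGV thresholds in cm/s (e.g. DIN 4150-type limits)
        print(level, annual_exceedance_rate(level, scenarios))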

  5. A transparent and data-driven global tectonic regionalisation model for seismic hazard assessment

    Science.gov (United States)

    Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice

    2018-01-01

    A key concept that is common to many assumptions inherent within seismic hazard assessment is that of tectonic similarity. This recognises that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalisation, particularly within a seismic hazard assessment context, have often been based on expert judgements; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalisation process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalisation model for seismic hazard applications, as well as subjective probabilities (e.g. degree of being active, degree of being cratonic) that indicate the degree to which a site belongs to a tectonic category.
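
    A toy illustration of the fuzzy-logic idea (made-up membership functions, not the published classification scheme): each site receives a degree of membership in each tectonic category from geophysical observables, and those memberships, rather than a single crisp label, can be carried into the hazard model.

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership function: rises on [a,b], is 1 on [b,c], falls on [c,d]."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def tectonic_membership(crustal_age_ma, strain_rate):
        """Degrees of membership (0..1) in two illustrative categories."""
        active = trapezoid(strain_rate, 1e-9, 1e-8, 1e-6, 1e-5)
        cratonic = min(trapezoid(crustal_age_ma, 500, 1500, 4000, 4500), 1.0 - active)
        return {"cratonic": cratonic, "active": active}

    print(tectonic_membership(crustal_age_ma=2500, strain_rate=5e-10))
    print(tectonic_membership(crustal_age_ma=100, strain_rate=5e-7))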

  6. An Empirical Jet-Surface Interaction Noise Model with Temperature and Nozzle Aspect Ratio Effects

    Science.gov (United States)

    Brown, Cliff

    2015-01-01

    An empirical model for jet-surface interaction (JSI) noise produced by a round jet near a flat plate is described and the resulting model evaluated. The model covers unheated and hot jet conditions (1 ≤ jet total temperature ratio ≤ 2.7) in the subsonic range (0.5 ≤ Ma ≤ 0.9), surface lengths 0.6 ≤ (axial distance from jet exit to surface trailing edge (inches)/nozzle exit diameter) ≤ 10, and surface standoff distances 0 ≤ (radial distance from jet lipline to surface (inches)/axial distance from jet exit to surface trailing edge (inches)) ≤ 1, using only second-order polynomials to provide predictable behavior. The JSI noise model is combined with an existing jet mixing noise model to produce exhaust noise predictions. Fit quality metrics and comparisons between the predicted and experimental data indicate that the model is suitable for many system level studies. A first-order correction to the JSI source model that accounts for the effect of nozzle aspect ratio is also explored. This correction is based on changes to the potential core length and frequency scaling associated with rectangular nozzles up to 8:1 aspect ratio. However, more work is needed to refine these findings into a formal model.

  7. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Margraf, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-02

    Potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands for conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response resulting in changes to particle distribution it is necessary to scale the DUO2 results to get a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released due to identical damage conditions between a spent fuel and a surrogate material like depleted uranium oxide (DUO2). A very limited number of full-scale experiments have been carried out to capture this data, and the oft-questioned validity of the results typically leads to overly-conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimate the respirable release fraction of fragmented nuclear fuel.

  8. TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment

    Science.gov (United States)

    Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano

    2016-04-01

    Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediments and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flows models proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools between practitioners and stakeholders, although the UE Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly and multiple stand-alone software are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, leading possibly to unreliable results. Therefore, some effort seems to be necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009, Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining advantages offered by the software-delivering model SaaS (Software as a Service) and by WebGIS technology and hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an

  9. Tsunami-hazard assessment based on subaquatic slope-failure susceptibility and tsunami-inundation modeling

    Science.gov (United States)

    Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael

    2015-04-01

    Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of resulting tsunami waves and iv) mapping of intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effect on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so called «intensity maps» for flooding that provide a link to the established hazard mapping framework, which so far does not include these phenomena. The current versions of these maps consider a 'worst case' deterministic earthquake scenario, however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of

  10. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective lambda control over the long term. This paper utilizes an emerging technique, relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training, and updating time of the RVM model are superior to the latest modelling methods, such as diagonal recurrent neural network (DRNN) and decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.

  11. Mating behavior, population growth, and the operational sex ratio: a periodic two-sex model approach.

    Science.gov (United States)

    Jenouvrier, Stéphanie; Caswell, Hal; Barbraud, Christophe; Weimerskirch, Henri

    2010-06-01

    We present a new approach to modeling two-sex populations, using periodic, nonlinear two-sex matrix models. The models project the population growth rate, the population structure, and any ratio of interest (e.g., operational sex ratio). The periodic formulation permits inclusion of highly seasonal behavioral events. A periodic product of the seasonal matrices describes annual population dynamics. The model is nonlinear because mating probability depends on the structure of the population. To study how the vital rates influence population growth rate, population structure, and operational sex ratio, we used sensitivity analysis of frequency-dependent nonlinear models. In nonlinear two-sex models the vital rates affect growth rate directly and also indirectly through effects on the population structure. The indirect effects can sometimes overwhelm the direct effects and are revealed only by nonlinear analysis. We find that the sensitivity of the population growth rate to female survival is negative for the emperor penguin, a species with highly seasonal breeding behavior. This result could not occur in linear models because changes in population structure have no effect on per capita reproduction. Our approach is applicable to ecological and evolutionary studies of any species in which males and females interact in a seasonal environment.
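
    A schematic sketch of this model class (not the emperor penguin parameterization used in the study): a two-sex population vector is projected through an annual cycle in which reproduction depends nonlinearly on the numbers of adult females and males through a harmonic-mean mating function, which is what makes the growth rate and the operational sex ratio frequency-dependent. Rates below are illustrative.

    import numpy as np

    def harmonic_mean_pairs(females, males):
        """Number of breeding pairs limited by the scarcer sex (harmonic mean birth function)."""
        if females + males == 0:
            return 0.0
        return 2.0 * females * males / (females + males)

    def project_year(n, s_f=0.92, s_m=0.90, fecundity=0.5, sex_ratio_birth=0.5):
        """One annual cycle: breeding season (nonlinear) followed by survival season (linear).
        n = [adult females, adult males]."""
        pairs = harmonic_mean_pairs(n[0], n[1])
        chicks = fecundity * pairs
        # Recruits enter the adult classes within the year in this simplified two-stage sketch.
        females = s_f * (n[0] + sex_ratio_birth * chicks)
        males = s_m * (n[1] + (1.0 - sex_ratio_birth) * chicks)
        return np.array([females, males])

    n = np.array([100.0, 60.0])
    for year in range(25):
        n = project_year(n)
    growth_rate = project_year(n)[0] / n[0]           # approximate asymptotic lambda
    print("population structure:", n / n.sum(), "lambda approx.", round(growth_rate, 3))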

  12. ANALYSIS OF MULTIVARIATE FAILURE TIME DATA USING MARGINAL PROPORTIONAL HAZARDS MODEL.

    Science.gov (United States)

    Chen, Ying; Chen, Kani; Ying, Zhiliang

    2010-01-01

    The marginal proportional hazards model is an important tool in the analysis of multivariate failure time data in the presence of censoring. We propose a method of estimation via the linear combinations of martingale residuals. The estimation and inference procedures are easy to implement numerically. The estimation is generally more accurate than the existing pseudo-likelihood approach: the size of efficiency gain can be considerable in some cases, and the maximum relative efficiency in theory is infinite. Consistency and asymptotic normality are established. Empirical evidence in support of the theoretical claims is shown in simulation studies.
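
    For orientation, the working-independence pseudo-likelihood fit that the proposed estimator is compared against (a marginal Cox model with a cluster-robust sandwich variance) can be obtained with the lifelines package roughly as sketched below; the column names and data are illustrative, and this is not the martingale-residual estimator of the paper.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Illustrative multivariate failure-time data: several subjects per cluster (e.g. family).
    df = pd.DataFrame({
        "time":    [5.0, 7.2, 3.1, 9.4, 2.5, 6.8, 4.4, 8.0],
        "event":   [1,   0,   1,   1,   1,   0,   1,   0],
        "x":       [0.2, 1.1, -0.4, 0.9, 1.5, -0.2, 0.7, 0.3],
        "cluster": [1,   1,   2,    2,   3,   3,   4,   4],
    })

    cph = CoxPHFitter()
    # cluster_col requests a robust sandwich variance that accounts for within-cluster dependence.
    cph.fit(df, duration_col="time", event_col="event", cluster_col="cluster")
    cph.print_summary()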

  13. Estimation of direct effects for survival data by using the Aalen additive hazards model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Gerster, Mette

    2011-01-01

    We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned. The first stage involves fitting Aalen's additive regression for the event time, given exposure, intermediate variable and confounders. The second stage involves applying Aalen's additive model, given the exposure alone, to a modified stochastic process (i.e. a modification of the observed counting process based on the first-stage estimates).

  14. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma......- and stable frailty distribution: The first study is on adoption data where the association between survival in families of adopted children and their adoptive and biological parents is studied. The second study is a cross-sectional study of the occurrence of back and neck pain in twins, illustrating...

  15. Validation of a 30m resolution flood hazard model of the conterminous United States

    Science.gov (United States)

    Sampson, C. C.; Wing, O.; Smith, A.; Bates, P. D.; Neal, J. C.

    2017-12-01

    We present a 30m resolution two-dimensional hydrodynamic model of the entire conterminous US that has been used to simulate continent-wide flood extent for ten return periods. The model uses a highly efficient numerical solution of the shallow water equations to simulate fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. We use the US National Elevation Dataset (NED) to determine topography for the model and the US Army Corps of Engineers National Levee Dataset to explicitly represent known flood defences. Return period flows and rainfall intensities are estimated using regionalized frequency analyses. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area maps. We also compare the results obtained from the NED-based continental model with results from a 90m resolution global hydraulic model built using SRTM terrain and identical boundary conditions. Where the FEMA Special Flood Hazard Areas are based on high quality local models the NED-based continental scale model attains a Hit Rate of 86% and a Critical Success Index (CSI) of 0.59; both are typical of scores achieved when comparing high quality reach-scale models to observed event data. The NED model also consistently outperformed the coarser SRTM model. The correspondence between the continental model and FEMA improves in temperate areas and for basins above 400 km2. Given typical hydraulic modeling uncertainties in the FEMA maps, it is probable that the continental-scale model can replicate them to within error. The continental model covers the entire continental US, compared to only 61% for FEMA, and also maps flooding in smaller watersheds not included in the FEMA coverage. The simulations were performed using computing hardware costing less than 100k, whereas the FEMA flood layers are built from thousands of individual local studies that took several decades to develop at an estimated cost (up
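
    The two skill scores quoted above are simple contingency-table metrics; a short sketch (generic definitions, not the authors' code) computes them from co-registered binary wet/dry grids.

    import numpy as np

    def flood_fit_scores(model_wet, benchmark_wet):
        """Hit rate and critical success index from boolean wet/dry grids of equal shape."""
        model_wet = np.asarray(model_wet, dtype=bool)
        benchmark_wet = np.asarray(benchmark_wet, dtype=bool)
        hits = np.sum(model_wet & benchmark_wet)
        misses = np.sum(~model_wet & benchmark_wet)
        false_alarms = np.sum(model_wet & ~benchmark_wet)
        hit_rate = hits / (hits + misses)
        csi = hits / (hits + misses + false_alarms)
        return hit_rate, csi

    # Tiny illustrative grids (1 = wet): the model captures most benchmark cells but adds one extra.
    benchmark = [[0, 1, 1], [0, 1, 1], [0, 0, 1]]
    model     = [[0, 1, 1], [1, 1, 1], [0, 0, 0]]
    print(flood_fit_scores(model, benchmark))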

  16. Clinical trials: odds ratios and multiple regression models--why and how to assess them

    NARCIS (Netherlands)

    Sobh, Mohamad; Cleophas, Ton J.; Hadj-Chaib, Amel; Zwinderman, Aeilko H.

    2008-01-01

    Odds ratios (ORs), unlike chi2 tests, provide direct insight into the strength of the relationship between treatment modalities and treatment effects. Multiple regression models can reduce the data spread due to certain patient characteristics and thus improve the precision of the treatment

  17. Statistical power of likelihood ratio and Wald tests in latent class models with covariates

    NARCIS (Netherlands)

    Gudicha, D.W.; Schmittmann, V.D.; Vermunt, J.K.

    2017-01-01

    This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models. For both tests, asymptotic distributions can be used; that is, the test statistic can be assumed to follow a central Chi-square under the null
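
    The computation rests on the fact that, under the alternative, both test statistics are asymptotically noncentral chi-square, so a generic power or sample-size sketch (not the paper's specific latent-class derivations) takes only a few lines once the degrees of freedom and the noncentrality parameter are in hand; the effect size below is illustrative.

    from scipy.stats import chi2, ncx2

    def power_chi2_test(noncentrality, df, alpha=0.05):
        """Asymptotic power of a chi-square test (Wald or likelihood ratio) with the
        given degrees of freedom and noncentrality parameter under the alternative."""
        critical = chi2.ppf(1.0 - alpha, df)
        return 1.0 - ncx2.cdf(critical, df, noncentrality)

    def required_n(noncentrality_per_obs, df, target_power=0.8, alpha=0.05):
        """Smallest sample size n such that power(n * noncentrality_per_obs) reaches the target."""
        n = 1
        while power_chi2_test(n * noncentrality_per_obs, df, alpha) < target_power:
            n += 1
        return n

    # Example: covariate effect with 2 degrees of freedom, per-observation noncentrality 0.02.
    print(power_chi2_test(noncentrality=0.02 * 300, df=2))   # power at n = 300
    print(required_n(noncentrality_per_obs=0.02, df=2))      # n for 80% power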

  18. Tsunami hazard preventing based land use planing model using GIS technique in Muang Krabi, Thailand

    International Nuclear Information System (INIS)

    Soormo, A.S.

    2012-01-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as other regions such as Phangna and Phuket, devastating human lives, coastal communities and economic activities. This research study aimed to generate a tsunami-hazard-preventing land use planning model using GIS (Geographical Information Systems) based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. The criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for the decision makers to redevelop the region. (author)
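
    Factor weights on the Saaty scale are typically derived as the normalized principal eigenvector of the pairwise comparison matrix, with a consistency ratio checking the coherence of the judgements; the sketch below uses illustrative comparison values, not those of the study.

    import numpy as np

    def ahp_weights(pairwise):
        """Weights and consistency ratio from a Saaty pairwise comparison matrix."""
        A = np.asarray(pairwise, dtype=float)
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)                    # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # random index (Saaty)
        return w, ci / ri

    # Illustrative comparisons for three factors: elevation, proximity to shore, population density.
    pairwise = [[1.0, 3.0, 5.0],
                [1/3, 1.0, 2.0],
                [1/5, 1/2, 1.0]]
    weights, consistency_ratio = ahp_weights(pairwise)
    print("weights:", np.round(weights, 3), "CR:", round(consistency_ratio, 3))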

  19. Measurements and models for hazardous chemical and mixed wastes. 1998 annual progress report

    International Nuclear Information System (INIS)

    Holcomb, C.; Louie, B.; Mullins, M.E.; Outcalt, S.L.; Rogers, T.N.; Watts, L.

    1998-01-01

    'Aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the US. A large quantity of the waste generated by the US chemical process industry is waste water. In addition, the majority of the waste inventory at DoE sites previously used for nuclear weapons production is aqueous waste. Large quantities of additional aqueous waste are expected to be generated during the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical property information is paramount. This knowledge will lead to huge savings by aiding in the design and optimization of treatment and disposal processes. The main objectives of this project are: Develop and validate models that accurately predict the phase equilibria and thermodynamic properties of hazardous aqueous systems necessary for the safe handling and successful design of separation and treatment processes for hazardous chemical and mixed wastes. Accurately measure the phase equilibria and thermodynamic properties of a representative system (water + acetone + isopropyl alcohol + sodium nitrate) over the applicable ranges of temperature, pressure, and composition to provide the pure component, binary, ternary, and quaternary experimental data required for model development. As of May, 1998, nine months into the first year of a three year project, the authors have made significant progress in the database development, have begun testing the models, and have been performance testing the apparatus on the pure components.'

  20. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  1. A numerical test method of California bearing ratio on graded crushed rocks using particle flow modeling

    OpenAIRE

    Jiang, Yingjun; Wong, Louis Ngai Yuen; Ren, Jiaolong

    2015-01-01

    In order to better understand the mechanical properties of graded crushed rocks (GCRs) and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR) test on GCRs. The effects of different testing conditions and micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied. The reliability of the numerical technique is verified. The numerical resu...

  2. Hazard-consistent ground motions generated with a stochastic fault-rupture model

    Energy Technology Data Exchange (ETDEWEB)

    Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-12-15

    Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s² peak acceleration, which corresponds to a 10⁻⁴ to 10⁻⁵ annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground motion acceleration generated for 1000–1100 cm/s² range from 1.5 to 3.0, where the deviation is evaluated with peak ground motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to

  3. System hazards in managing laboratory test requests and results in primary care: medical protection database analysis and conceptual model.

    Science.gov (United States)

    Bowie, Paul; Price, Julie; Hepworth, Neil; Dinwoodie, Mark; McKay, John

    2015-11-27

    To analyse a medical protection organisation's database to identify hazards related to general practice systems for ordering laboratory tests, managing test results and communicating test result outcomes to patients. To integrate these data with other published evidence sources to inform design of a systems-based conceptual model of related hazards. A retrospective database analysis. General practices in the UK and Ireland. 778 UK and Ireland general practices participating in a medical protection organisation's clinical risk self-assessment (CRSA) programme from January 2008 to December 2014. Proportion of practices with system risks; categorisation of identified hazards; most frequently occurring hazards; development of a conceptual model of hazards; and potential impacts on health, well-being and organisational performance. CRSA visits were undertaken to 778 UK and Ireland general practices of which a range of systems hazards were recorded across the laboratory test ordering and results management systems in 647 practices (83.2%). A total of 45 discrete hazard categories were identified with a mean of 3.6 per practice (SD=1.94). The most frequently occurring hazard was the inadequate process for matching test requests and results received (n=350, 54.1%). Of the 1604 instances where hazards were recorded, the most frequent was at the 'postanalytical test stage' (n=702, 43.8%), followed closely by 'communication outcomes issues' (n=628, 39.1%). Based on arguably the largest data set currently available on the subject matter, our study findings shed new light on the scale and nature of hazards related to test results handling systems, which can inform future efforts to research and improve the design and reliability of these systems. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. CalTOX, a multimedia total exposure model for hazardous-waste sites

    International Nuclear Information System (INIS)

    McKone, T.E.

    1993-06-01

    CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population

  5. A Test of Carbon and Oxygen Stable Isotope Ratio Process Models in Tree Rings.

    Science.gov (United States)

    Roden, J. S.; Farquhar, G. D.

    2008-12-01

    Stable isotope ratios of carbon and oxygen in tree ring cellulose have been used to infer environmental change. Process-based models have been developed to clarify the potential of historic tree ring records for meaningful paleoclimatic reconstructions. However, isotopic variation can be influenced by multiple environmental factors, making simplistic interpretations problematic. Recently, the dual isotope approach, where the variation in one stable isotope ratio (e.g. oxygen) is used to constrain the interpretation of variation in another (e.g. carbon), has been shown to have the potential to de-convolute isotopic analysis. However, this approach requires further testing to determine its applicability for paleo-reconstructions using tree-ring time series. We present a study where the information needed to parameterize mechanistic models for both carbon and oxygen stable isotope ratios was collected in controlled environment chambers for two species (Pinus radiata and Eucalyptus globulus). The seedlings were exposed to treatments designed to modify leaf temperature, transpiration rates, stomatal conductance and photosynthetic capacity. Both species were grown for over 100 days under two humidity regimes that differed by 20%. Stomatal conductance was significantly different between species and for seedlings under drought conditions but not between other treatments or humidity regimes. The treatments produced large differences in transpiration rate and photosynthesis. Treatments that affected photosynthetic rates but not stomatal conductance influenced carbon isotope discrimination more than those that influenced primarily conductance. The various treatments produced a range in oxygen isotope ratios of 7 ‰. Process models predicted greater oxygen isotope enrichment in tree ring cellulose than observed. The oxygen isotope ratios of bulk leaf water were reasonably well predicted by current steady-state models. However, the fractional difference between models that

  6. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    Science.gov (United States)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods in the African Continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test if aggregating different number and combination of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions that provide a standard set of tests across different climates and hydraulic conditions.

  7. MODELING AND FORECASTING THE GROSS ENROLLMENT RATIO IN ROMANIAN PRIMARY SCHOOL

    Directory of Open Access Journals (Sweden)

    MARINOIU CRISTIAN

    2014-06-01

    Full Text Available The gross enrollment ratio in primary school is one of the basic indicators used to evaluate the stated objectives of the educational system. Knowing its evolution allows a more rigorous substantiation of strategies and human resources policies, not only in the educational field but also in the economic one. In this paper we propose an econometric model to describe the gross enrollment ratio in Romanian primary school and forecast it for the coming years, guided by the Box-Jenkins methodology. The results indicate a continuing decrease of this ratio over the next years.
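
    For readers unfamiliar with the Box-Jenkins workflow referred to above, the sketch below fits an ARIMA model to an invented annual enrollment ratio series and forecasts the next few years; the series, the ARIMA order and the package choice (statsmodels) are assumptions for illustration only.

```python
# Illustrative Box-Jenkins (ARIMA) sketch for an annual enrollment ratio
# series; the data, the order and the forecast horizon are invented.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

ratio = np.array([104.1, 103.5, 102.8, 102.0, 101.3, 100.9, 100.2,
                  99.6, 99.0, 98.5, 97.9, 97.4, 96.8, 96.3])

fit = ARIMA(ratio, order=(1, 1, 0)).fit()   # order chosen for illustration only
print(fit.forecast(steps=3))                # predicted ratios for the next three years
```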

  8. Cosmic ray muon charge ratio derived from the new scaling variable model

    CERN Document Server

    Bhattacharya, D P

    1980-01-01

    The charge ratio of sea level muons has been estimated from the new scaling variable model and the CERN Intersecting Storage Ring data of Capiluppi et al. (1974) for pp → π±X and pp → K±X inclusive reactions. The estimated muon charge ratio is found to be 1.21 and the result has been compared with the experimental data of Parker et al. (1969), Burnet et al. (1973), Ashley et al., and Muraki et al. (1979). (20 refs).

  9. Active fault characterization throughout the Caribbean and Central America for seismic hazard modeling

    Science.gov (United States)

    Styron, Richard; Pagani, Marco; Garcia, Julio

    2017-04-01

    The region encompassing Central America and the Caribbean is tectonically complex, defined by the Caribbean plate's interactions with the North American, South American and Cocos plates. Though active deformation over much of the region has received at least cursory investigation the past 50 years, the area is chronically understudied and lacks a modern, synoptic characterization. Regardless, the level of risk in the region - as dramatically demonstrated by the 2010 Haiti earthquake - remains high because of high-vulnerability buildings and dense urban areas home to over 100 million people, who are concentrated near plate boundaries and other major structures. As part of a broader program to study seismic hazard worldwide, the Global Earthquake Model Foundation is currently working to quantify seismic hazard in the region. To this end, we are compiling a database of active faults throughout the region that will be integrated into similar models as recently done in South America. Our initial compilation hosts about 180 fault traces in the region. The faults show a wide range of characteristics, reflecting the diverse styles of plate boundary and plate-margin deformation observed. Regional deformation ranges from highly localized faulting along well-defined strike-slip faults to broad zones of distributed normal or thrust faulting, and from readily-observable yet slowly-slipping structures to inferred faults with geodetically-measured slip rates >10 mm/yr but essentially no geomorphic expression. Furthermore, primary structures such as the Motagua-Polochic Fault Zone (the strike-slip plate boundary between the North American and Caribbean plates in Guatemala) display strong along-strike slip rate gradients, and many other structures are undersea for most or all of their length. A thorough assessment of seismic hazard in the region will require the integration of a range of datasets and techniques and a comprehensive characterization of epistemic uncertainties driving

  10. In Situ Measurements of the NO2/NO Ratio for Testing Atmospheric Photochemical Models

    Science.gov (United States)

    Jaegle, L.; Webster, C. R.; May, R. D.; Fahey, D. W.; Woodbridge, E. L.; Keim, E. R.; Gao, R. S.; Proffitt, M. H.; Stimpfle, R. M.; Salawitch, R. J.

    1994-01-01

    Simultaneous in situ measurements of NO2, NO, O3, ClO, pressure and temperature have been made for the first time, presenting a unique opportunity to test our current understanding of the photochemistry of the lower stratosphere. Data were collected from several flights of the ER-2 aircraft at mid-latitudes in May 1993 during NASA's Stratospheric Photochemistry, Aerosols and Dynamics Expedition (SPADE). The daytime ratio of NO2/NO remains fairly constant at 19 km with a typical value of 0.68 and standard deviation of +/- 0.17. The ratio observations are compared with simple steady-state calculations based on laboratory-measured reaction rates and modeled NO2 photolysis rates. At each measurement point the daytime NO2/NO with its measurement uncertainty overlap the results of steady-state calculations and associated uncertainty. However, over all the ER-2 flights examined, the model systematically overestimates the ratio by 40% on average. Possible sources of error are examined in both model and measurements. It is shown that more accurate laboratory determinations of the NO + O3 reaction rate and of the NO2 cross-sections in the 200-220 K temperature range characteristic of the lower stratosphere would allow for a more robust test of our knowledge of NO(x) photochemistry by reducing significant sources of uncertainties in the interpretation of stratospheric measurements. The present measurements are compared with earlier observations of the ratio at higher altitudes.
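
    The steady-state comparison described above amounts to evaluating NO2/NO ≈ (k_NO+O3[O3] + k_NO+ClO[ClO]) / j_NO2 during daytime. The sketch below uses commonly quoted Arrhenius forms for the two rate constants together with invented lower-stratospheric concentrations and an assumed photolysis frequency, so the numbers are illustrative rather than a reproduction of the SPADE analysis.

```python
# Photochemical steady-state estimate of the daytime NO2/NO ratio.
# Concentrations and j_NO2 are illustrative; rate constants are the
# commonly used Arrhenius expressions (an assumption, check current JPL values).
import math

T = 210.0        # K, lower-stratosphere temperature (assumed)
o3 = 2.0e12      # O3, molecules cm^-3 (illustrative)
clo = 5.0e7      # ClO, molecules cm^-3 (illustrative)
j_no2 = 8.0e-3   # s^-1, assumed NO2 photolysis frequency

k_no_o3 = 3.0e-12 * math.exp(-1500.0 / T)   # cm^3 molecule^-1 s^-1
k_no_clo = 6.4e-12 * math.exp(290.0 / T)    # cm^3 molecule^-1 s^-1

ratio = (k_no_o3 * o3 + k_no_clo * clo) / j_no2
print(f"steady-state NO2/NO ~ {ratio:.2f}")
```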

  11. A Three End-Member Mixing Model Based on Isotopic Composition and Elemental Ratio

    Directory of Open Access Journals (Sweden)

    Kon-Kee Liu; Shuh-Ji Kao

    2007-01-01

    Full Text Available A three end-member mixing model based on nitrogen isotopic composition and organic carbon to nitrogen ratio of suspended particulate matter in an aquatic environment has been developed. Mathematical expressions have been derived for the calculation of the fractions of nitrogen or organic carbon originating from three different sources of distinct isotopic and elemental compositions. The model was successfully applied to determine the contributions from anthropogenic wastes, soils and bedrock-derived sediments to particulate nitrogen and particulate organic carbon in the Danshuei River during the flood caused by Typhoon Bilis in August 2000. The model solutions have been expressed in a general form that allows applications to mixtures with other types of isotopic compositions and elemental ratios or in forms other than suspended particulate matter.
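
    The mixing calculation described above reduces to a small linear system: the nitrogen fractions of the three sources sum to one, and both the δ15N and the C/N ratio of the mixture are fraction-weighted averages of the end-member values. The end-member and mixture values in the sketch below are invented and only illustrate the algebra.

```python
# Three end-member mixing sketch: solve for the nitrogen fractions from the
# mixture's delta15N and C/N ratio. All numbers are invented.
import numpy as np

d15n = np.array([10.0, 4.0, 1.0])    # wastes, soils, bedrock-derived sediments
cn   = np.array([6.0, 12.0, 20.0])   # organic C : N ratios of the three sources

mix_d15n, mix_cn = 6.5, 11.0         # measured in suspended particulate matter

A = np.array([np.ones(3), d15n, cn])  # mass balance, d15N balance, C/N balance
b = np.array([1.0, mix_d15n, mix_cn])
fractions = np.linalg.solve(A, b)
print(dict(zip(["wastes", "soils", "bedrock"], fractions.round(3))))
```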

  12. Partitioning into hazard subregions for regional peaks-over-threshold modeling of heavy precipitation

    Science.gov (United States)

    Carreau, J.; Naveau, P.; Neppel, L.

    2017-05-01

    The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
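
    A minimal peaks-over-threshold sketch, assuming synthetic daily rainfall: exceedances above a high threshold are fitted with a generalized Pareto distribution, whose shape parameter is the quantity that the subregion partitioning above holds constant within each hazard subregion. The threshold choice and data are illustrative only.

```python
# Peaks-over-threshold fit of a generalized Pareto (GP) distribution to
# exceedances of synthetic daily rainfall over a high quantile threshold.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.4, scale=12.0, size=20000)   # synthetic daily totals

threshold = np.quantile(rain, 0.98)
excess = rain[rain > threshold] - threshold

# c is the shape parameter controlling the heaviness of the tail
c, loc, scale = genpareto.fit(excess, floc=0.0)
print(f"shape={c:.3f}, scale={scale:.2f}, threshold={threshold:.1f}")

# high conditional quantile of the exceedances, mapped back to rainfall depth
print("0.99 conditional quantile:",
      genpareto.ppf(0.99, c, loc=0.0, scale=scale) + threshold)
```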

  13. Perspectives of widely scalable exposure models for multi-hazard global risk assessment

    Science.gov (United States)

    Pittore, Massimiliano; Haas, Michael; Wieland, Marc

    2017-04-01

    Less than 5% of earth's surface is urbanized, and currently hosts around 7.5 billion people, with these figures constantly changing as increasingly faster urbanization takes place. A significant percentage of this population, often in economically developing countries, is exposed to different natural hazards which contribute to further raise the bar on the expected economic and social consequences. Global initiatives such as GAR 15 advocate for a wide scale, possibly global perspective on the assessment of risk arising from natural hazards, as a way to increase the risk-awareness of decision-makers and stakeholders, and to better harmonize large-scale prevention and mitigation actions. Realizing, and even more importantly maintaining a widely-scalable exposure model suited for the assessment of different natural risks would allow large-scale quantitative risk and loss assessment in a more efficient and reliable way. Considering its complexity and extent, such a task is undoubtedly a challenging one, spanning across multiple disciplines and operational contexts. On the other hand, with a careful design and an efficient and scalable implementation such endeavour would be well within reach and would contribute to significantly improve our understanding of the mechanisms lying behind what we call natural catastrophes. In this contribution we'll review existing relevant applications, will discuss how to tackle the most critical issues and will outline a road map for the implementation of global-scoped exposure models.

  14. Household hazardous waste disposal to landfill: Using LandSim to model leachate migration

    International Nuclear Information System (INIS)

    Slack, Rebecca J.; Gronow, Jan R.; Hall, David H.; Voulvoulis, Nikolaos

    2007-01-01

    Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW. - Aquatic pollutants linked to the disposal of household hazardous waste in municipal landfills have the potential to exist in soil and groundwater for many years

  15. A "mental models" approach to the communication of subsurface hydrology and hazards

    Science.gov (United States)

    Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison

    2016-05-01

    Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.

  16. Application of an Individual-Based Transmission Hazard Model for Estimation of Influenza Vaccine Effectiveness in a Household Cohort.

    Science.gov (United States)

    Petrie, Joshua G; Eisenberg, Marisa C; Ng, Sophia; Malosh, Ryan E; Lee, Kyu Han; Ohmit, Suzanne E; Monto, Arnold S

    2017-12-15

    Household cohort studies are an important design for the study of respiratory virus transmission. Inferences from these studies can be improved through the use of mechanistic models to account for household structure and risk as an alternative to traditional regression models. We adapted a previously described individual-based transmission hazard (TH) model and assessed its utility for analyzing data from a household cohort maintained in part for study of influenza vaccine effectiveness (VE). Households with ≥4 individuals, including ≥2 children, were included, and VE estimates from the TH model were compared with those from Cox proportional hazards (PH) models. For each individual, TH models estimated hazards of infection from the community and each infected household contact. Influenza A(H3N2) infection was laboratory-confirmed in 58 (4%) subjects. VE estimates from both models were similarly low overall (Cox PH: 20%, 95% confidence interval: -57, 59; TH: 27%, 95% credible interval: -23, 58) and highest for children.
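
    As a sketch of the Cox proportional hazards arm of the comparison, vaccine effectiveness is conventionally reported as (1 - hazard ratio) x 100%. The tiny data frame, column names and package choice (lifelines) below are illustrative assumptions, not the study's data or code.

```python
# Cox PH vaccine effectiveness sketch: VE = (1 - hazard ratio) * 100%.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":       [120, 89, 150, 45, 150, 150, 77, 150],  # days of follow-up (invented)
    "infected":   [1, 1, 0, 1, 0, 0, 1, 0],               # laboratory-confirmed event
    "vaccinated": [0, 1, 1, 0, 1, 0, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="infected")
hr = cph.hazard_ratios_["vaccinated"]
print(f"VE = {(1 - hr) * 100:.0f}%")
```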

  17. Constraints on the tensor-to-scalar ratio for non-power-law models

    International Nuclear Information System (INIS)

    Vázquez, J. Alberto; Bridges, M.; Ma, Yin-Zhe; Hobson, M.P.

    2013-01-01

    Recent cosmological observations hint at a deviation from the simple power-law form of the primordial spectrum of curvature perturbations. In this paper we show that in the presence of a tensor component, a turn-over in the initial spectrum is preferred by current observations, and hence non-power-law models ought to be considered. For instance, for a power-law parameterisation with both a tensor component and running parameter, current data show a preference for a negative running at more than 2.5σ C.L. As a consequence of this deviation from a power-law, constraints on the tensor-to-scalar ratio r are slightly broader. We also present constraints on the inflationary parameters for a model-independent reconstruction and the Lasenby and Doran (LD) model. In particular, the constraints on the tensor-to-scalar ratio from the LD model are: r_LD = 0.11 ± 0.024. In addition to current data, we show expected constraints from Planck-like and CMB-Pol sensitivity experiments by using Markov-Chain-Monte-Carlo sampling chains. For all the models, we have included the Bayesian Evidence to perform a model selection analysis. The Bayes factor, using current observations, shows a strong preference for the LD model over the standard power-law parameterisation, and provides an insight into the accuracy of differentiating models through future surveys

  18. Combining Machine Learning and Mesoscale Modeling for Atmospheric Releases Hazard Assessment

    Science.gov (United States)

    Cervone, G.; Franzese, P.; Ezber, Y.; Boybeyi, Z.

    2007-12-01

    In applications such as homeland security and hazards response, it is necessary to know in real time which areas are most at risk from a potentially harmful atmospheric pollutant. Using high resolution remote sensing measurements and atmospheric mesoscale numerical models, it is possible to detect and study the transport and dispersion of particles with great accuracy, and to determine the ground concentrations which might pose a threat to people and properties. Satellite observations from different sensors must be fused together to compensate for different spatial, temporal and spectral resolutions and data availability. Such observations are used to initialize and validate atmospheric mesoscale models, which can provide accurate estimates of ground concentrations. Such numerical models are, however, usually slow due to the complex nature of the computations, and do not provide real time answers. We will define probability maps of risks by running several atmospheric mesoscale and T&D simulations spanning the climatological input conditions of an entire year, observed using high resolution remote sensing instruments. Such maps provide an immediate risk assessment area associated with a given source location. If a release indeed occurs, the computed risk maps can be used for first assessment and rapid response. We analyze the output of the mesoscale model runs using machine learning algorithms to find characteristic patterns which relate potential risk areas with atmospheric parameters which can be observed using remote sensing instruments and ground measurements. Therefore, when a release occurs, it is possible to give a quick hazard assessment without running a time consuming model, but by comparing the current atmospheric conditions with those associated with each identified risk area. The offline learning provides knowledge that can later be used to protect people and properties.

  19. A prediction model for wind speed ratios at pedestrian level with simplified urban canopies

    Science.gov (United States)

    Ikegaya, N.; Ikeda, Y.; Hagishima, A.; Razak, A. A.; Tanimoto, J.

    2017-02-01

    The purpose of this study is to review and improve prediction models for wind speed ratios at pedestrian level with simplified urban canopies. We adopted an extensive database of velocity fields under various conditions for arrays consisting of cubes, slender or flattened rectangles, and rectangles with varying roughness heights. Conclusions are summarized as follows: first, a new geometric parameter is introduced as a function of the plan area index and the aspect ratio so as to express the increase in virtual density that causes wind speed reduction. Second, the estimated wind speed ratios in the range 0.05 coefficients between the wind speeds averaged over the entire region, and the front or side region values are larger than 0.8. In contrast, in areas where the influence of roughness elements is significant, such as behind a building, the wind speeds are weakly correlated.

  20. Compost feedstock characteristics and ratio modelling for organic waste materials co-composting in Malaysia.

    Science.gov (United States)

    Chai, E W; H'ng, P S; Peng, S H; Wan-Azha, W M; Chin, K L; Chow, M J; Wong, W Z

    2013-01-01

    In Malaysia, large amounts of organic materials, which lead to disposal problems, are generated from agricultural residues, especially from palm oil industries. Increasing landfill costs and regulations, which limit many types of waste accepted at landfills, have increased the interest in composting as a component of waste management. The objectives of this study were to characterize compost feedstock properties of common organic waste materials available in Malaysia. Thus, a ratio modelling of matching ingredients for empty fruit bunches (EFBs) co-composting using different organic materials in Malaysia was done. Organic waste materials with a C/N ratio of composting. The outcome of this study suggested that the percentage of EFB ranged between 50% and 60%, which is considered the ideal mixing ratio in EFB co-composting. Conclusively, EFB can be utilized in composting if appropriate feedstock in terms of physical and chemical characteristics is coordinated in the co-composting process.

  1. Interband B (E2) ratios in the rigid triaxial model, a review

    International Nuclear Information System (INIS)

    Gupta, J.B.; Sharma, S.

    1989-01-01

    Up-to-date, accurate and extensive data on γ-g B(E2) ratios for even-even rare-earth nuclei are compared with the predictions of the rigid triaxial model of collective rotation to search for a correlation between the nuclear structure variation with Z, N and the γ₀ parameter of the model. The internal consistency of the predictions of the model is investigated and the spectral features vis-a-vis the γ-soft and the γ-rigid potential are discussed. (orig.)

  2. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    Science.gov (United States)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.

  3. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    Science.gov (United States)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak

  4. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method

    DEFF Research Database (Denmark)

    Valentin, Jan B.; Andreetta, Christian; Boomsma, Wouter

    2014-01-01

    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length...... the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins....... The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications. © 2013 Wiley Periodicals, Inc....

  5. Two-Part Models for Fractional Responses Defined as Ratios of Integers

    Directory of Open Access Journals (Sweden)

    Harald Oberhofer

    2014-09-01

    Full Text Available This paper discusses two alternative two-part models for fractional response variables that are defined as ratios of integers. The first two-part model assumes a Binomial distribution and known group size. It nests the one-part fractional response model proposed by Papke and Wooldridge (1996) and, thus, allows one to apply Wald, LM and/or LR tests in order to discriminate between the two models. The second model extends the first one by allowing for overdispersion in the data. We demonstrate the usefulness of the proposed two-part models for data on the 401(k) pension plan participation rates used in Papke and Wooldridge (1996).
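
    A hedged sketch of the first model's core idea, assuming invented firm-level data: with the group size known, the participation count can be modelled directly as a Binomial GLM with a logit link, which is the piece that nests the one-part fractional response estimator.

```python
# Binomial GLM with known group size: model participation counts out of
# the eligible employees per firm. All data are invented.
import numpy as np
import statsmodels.api as sm

participants = np.array([12, 45, 3, 60, 25])        # employees enrolled in the plan
group_size   = np.array([20, 50, 30, 75, 40])       # eligible employees per firm
match_rate   = np.array([0.5, 1.0, 0.1, 1.2, 0.6])  # employer match (covariate)

X = sm.add_constant(match_rate)
y = np.column_stack([participants, group_size - participants])  # (successes, failures)

model = sm.GLM(y, X, family=sm.families.Binomial())
print(model.fit().summary())
```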

  6. A hypothetical model for predicting the toxicity of high aspect ratio nanoparticles (HARN)

    Science.gov (United States)

    Tran, C. L.; Tantra, R.; Donaldson, K.; Stone, V.; Hankin, S. M.; Ross, B.; Aitken, R. J.; Jones, A. D.

    2011-12-01

    The ability to predict nanoparticle (dimensional structures which are less than 100 nm in size) toxicity through the use of a suitable model is an important goal if nanoparticles are to be regulated in terms of exposures and toxicological effects. Recently, a model to predict toxicity of nanoparticles with high aspect ratio has been put forward by a consortium of scientists. The High aspect ratio nanoparticles (HARN) model is a platform that relates the physical dimensions of HARN (specifically length and diameter ratio) and biopersistence to their toxicity in biological environments. Potentially, this model is of great public health and economic importance, as it can be used as a tool to not only predict toxicological activity but can be used to classify the toxicity of various fibrous nanoparticles, without the need to carry out time-consuming and expensive toxicology studies. However, this model of toxicity is currently hypothetical in nature and is based solely on drawing similarities in its dimensional geometry with that of asbestos and synthetic vitreous fibres. The aim of this review is two-fold: (a) to present findings from past literature, on the physicochemical property and pathogenicity bioassay testing of HARN (b) to identify some of the challenges and future research steps crucial before the HARN model can be accepted as a predictive model. By presenting what has been done, we are able to identify scientific challenges and research directions that are needed for the HARN model to gain public acceptance. Our recommendations for future research includes the need to: (a) accurately link physicochemical data with corresponding pathogenicity assay data, through the use of suitable reference standards and standardised protocols, (b) develop better tools/techniques for physicochemical characterisation, (c) to develop better ways of monitoring HARN in the workplace, (d) to reliably measure dose exposure levels, in order to support future epidemiological

  7. FLOOD HAZARD MAP IN THE CITY OF BATNA (ALGERIA) BY HYDRAULIC MODELING APPROACH

    Directory of Open Access Journals (Sweden)

    Guellouh SAMI

    2016-06-01

    Full Text Available In the light of global climatic changes that appear to influence the frequency and the intensity of floods, and whose damages are still growing, understanding hydrological processes, their spatiotemporal setting and their extreme forms has become a paramount concern for local communities in forecasting terms. The aim of this study is to map the flood hazard using a hydraulic modeling method. In fact, using a Geographic Information System (GIS) allows us to perform a more detailed spatial analysis of the extent of the flooding risk, through the use of hydraulic modeling programs at different frequencies. Based on the results of this analysis, decision makers can implement a strategy of risk management related to rivers overflowing through the city of Batna.

  8. The unconvincing product - Consumer versus expert hazard identification: A mental models study of novel foods

    DEFF Research Database (Denmark)

    Hagemann, Kit; Scholderer, Joachim

    and experts understanding of benefits and risks associated with three Novel foods (a potato, rice and functional food ingredients) using a relatively new methodology for the study of risk perception called Mental models. Mental models focus on the way people conceptualise hazardous processes and allows...... offered by lifelong habits. Consumers found it utterly unconvincing that, all of a sudden, they should regard their everyday foods as toxic and therefore it might not be possible to effectively communicate the health benefits of some novel foods to consumers. Several misconceptions became apparent...... consumers realize that novel foods other than GM foods do not have to undergo environmental risk assessment. Another implication for risk management appeared as consumers did not demand any participatory elements in the risk analysis process. Consumers talked extensively about normative, governance...

  9. Proportional hazards model with varying coefficients for length-biased data.

    Science.gov (United States)

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study the nonlinear interaction effect of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established by using the martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have an excellent finite-sample performance. The Channing House data is analyzed to demonstrate the applications of the proposed method.

  10. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major data base for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem; namely, the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards

  11. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2006-04-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  12. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2007-08-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  13. A Reduced Model for Salt-Finger Convection in the Small Diffusivity Ratio Limit

    Directory of Open Access Journals (Sweden)

    Jin-Han Xie

    2017-01-01

    Full Text Available A simple model of nonlinear salt-finger convection in two dimensions is derived and studied. The model is valid in the limit of a small solute to heat diffusivity ratio and a large density ratio, which is relevant to both oceanographic and astrophysical applications. Two limits distinguished by the magnitude of the Schmidt number are found. For order one Schmidt numbers, appropriate for astrophysical applications, a modified Rayleigh–Bénard system with large-scale damping due to a stabilizing temperature is obtained. For large Schmidt numbers, appropriate for the oceanic setting, the model combines a prognostic equation for the solute field and a diagnostic equation for inertia-free momentum dynamics. Two distinct saturation regimes are identified for the second model: the weakly driven regime is characterized by a large-scale flow associated with a balance between advection and linear instability, while the strongly-driven regime produces multiscale structures, resulting in a balance between energy input through linear instability and energy transfer between scales. For both regimes, we analytically predict and numerically confirm the dependence of the kinetic energy and salinity fluxes on the ratio between solutal and thermal Rayleigh numbers. The spectra and probability density functions are also computed.

  14. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce detailed maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR). It is the ratio of the observed to the expected number of counts in an area, and it has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method, and that it can overcome the SMR problem when there is no observed bladder cancer in an area.
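
    The SMR itself is just an observed-to-expected ratio, as the sketch below illustrates with invented district counts; the instability it shows for small areas or rare cases is exactly what motivates the log-normal (or Bayesian) smoothing discussed above.

```python
# Crude SMR per district: observed cases divided by expected cases under
# the overall rate. All counts are invented.
import numpy as np

observed = np.array([4, 0, 12, 7])            # bladder cancer cases per district
population = np.array([50_000, 8_000, 120_000, 90_000])

overall_rate = observed.sum() / population.sum()
expected = overall_rate * population
smr = observed / expected
print(np.round(smr, 2))   # the SMR of 0 in the small district illustrates the instability
```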

  15. Parton distributions and EMC ratios of the 6Li nucleus in the constituent quark exchange model

    Science.gov (United States)

    Modarres, M.; Hadian, A.

    2017-10-01

    While the constituent quark model (CQM), in which the quarks are assumed to be complex objects, is used to calculate the parton distribution functions of the iso-scalar lithium-6 (6Li) nucleus, the u-d constituent quark distribution functions of the 6Li nucleus are evaluated from the valence quark exchange formalism (VQEF) for the A = 6 iso-scalar system. After computing the valence quark, sea quark, and gluon distribution functions in the constituent quark exchange model (CQEM, i.e., CQM + VQEF), the nucleus structure function is calculated for the 6Li nucleus at the leading order (LO) and the next-to-leading-order (NLO) levels to extract the European Muon Collaboration (EMC) ratio, at different hard scales, using the standard Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations. The outcomes are compared with those of our previous works and the available NMC experimental data, and various physical points are discussed. It is observed that the present EMC ratios are considerably improved compared with those of our previous works, in which only the valence quark distributions were considered to calculate the EMC ratio, and are closer to the NMC data. Finally, it is concluded that at a given appropriate hard scale, the LO approximation may be enough for calculating the nucleus EMC ratio.

  16. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models.

    Science.gov (United States)

    Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.

    2013-11-01

    El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that at the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The more exposed areas to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results

  17. Issues in testing the new national seismic hazard model for Italy

    Science.gov (United States)

    Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.

    2016-12-01

    It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and that we have no agreed way of assessing how well a map performed in the past and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismology, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus the strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps have developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences and how the models have evolved into the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options, including deterministic scenario-based ones. Comparing what works

  18. Application of statistical and dynamics models for snow avalanche hazard assessment in mountain regions of Russia

    Science.gov (United States)

    Turchaninova, A.

    2012-04-01

    The estimation of extreme avalanche runout distances, flow velocities, impact pressures and volumes is an essential part of snow engineering in mountain regions of Russia, and it underpins avalanche hazard assessment and mapping. Russian guidelines accept the application of different avalanche models as well as different approaches for estimating model input parameters. Consequently, different teams of engineers in Russia apply various dynamics and statistical models in engineering practice. This gives avalanche practitioners and experts more freedom, but it also causes considerable uncertainty where the avalanche models have serious limitations. We discuss these problems by presenting the application results of different well-known and widely used statistical (developed in Russia) and avalanche dynamics models for several avalanche test sites in the Khibini Mountains (the Kola Peninsula) and the Caucasus. The most accurate and well-documented data on different powder and wet, large rare and small frequent snow avalanche events has been collected from the 1960s until today in the Khibini Mountains by the Avalanche Safety Center of "Apatit". This data was digitized and is available for use and analysis. A detailed digital avalanche database (GIS) was then created for the first time. It contains contours of observed avalanches (ESRI shapes, more than 50 years of observations), DEMs, remote sensing data, descriptions of snow pits, photos, etc. Thus, the Russian avalanche data is a unique source of information for understanding avalanche flow rheology and for the future development and calibration of avalanche dynamics models. The GIS database was used to analyze model input parameters and to calibrate and verify avalanche models. Regarding extreme dynamic parameters, the outputs of different models can differ significantly, which is unacceptable for engineering purposes in the absence of well-defined guidelines in Russia. The frequency curves for the runout distance

  19. Development of models to inform a national Daily Landslide Hazard Assessment for Great Britain

    Science.gov (United States)

    Dijkstra, Tom A.; Reeves, Helen J.; Dashwood, Claire; Pennington, Catherine; Freeborough, Katy; Mackay, Jonathan D.; Uhlemann, Sebastian S.; Chambers, Jonathan E.; Wilkinson, Paul B.

    2015-04-01

    were combined with records of observed landslide events to establish which antecedent effective precipitation (AEP) signatures of different duration could be used as a pragmatic proxy for the occurrence of landslides. It was established that 1-, 7-, and 90-day AEP provided the most significant correlations, and these were used to calculate the probability of at least one landslide occurring. The method was then extended over the period 2006 to 2014 and the results evaluated against observed occurrences. It is recognised that AEP is a relatively poor proxy for simulating effective stress conditions along potential slip surfaces. However, the temporal pattern of landslide probability compares well to the observed occurrences and provides a potential benefit to assist with the daily landslide hazard assessment (DLHA). Further work is continuing to fine-tune the model for landslide type, better spatial resolution of effective precipitation input and cross-reference to models that capture changes in water balance and conditions along slip surfaces. The latter is facilitated by intensive research at several field laboratories, such as the Hollin Hill site in Yorkshire, England. At this site, a decade of activity has generated a broad range of research and a wealth of data. This paper reports on one example of recent work: the characterisation of near-surface hydrology using infiltration experiments where hydrological pathways are captured, among others, by electrical resistivity tomography. This research, which has further developed our understanding of soil moisture movement in a heterogeneous landslide complex, has highlighted the importance of establishing detailed ground models to enable determination of landslide potential at high resolution. In turn, the knowledge gained through this research is used to enhance the expertise for the daily landslide hazard assessments at a national scale.
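
    As background only, the sketch below shows one way such AEP signatures could be computed from a daily effective-precipitation series and turned into a daily probability of at least one landslide. The paper does not publish its model; the logistic link and all coefficients here are assumptions made purely for illustration.

    ```python
    # Illustrative sketch, not the published model: rolling 1-, 7- and 90-day antecedent
    # effective precipitation (AEP) totals feed a hypothetical logistic model for
    # P(at least one landslide per day).
    import numpy as np
    import pandas as pd

    def antecedent_totals(eff_precip: pd.Series, windows=(1, 7, 90)) -> pd.DataFrame:
        """Rolling AEP totals for each window length (in days)."""
        return pd.DataFrame({f"aep_{w}d": eff_precip.rolling(w, min_periods=w).sum()
                             for w in windows})

    def landslide_probability(aep: pd.DataFrame, coef: pd.Series, intercept: float) -> pd.Series:
        """Logistic link (an assumption here) from AEP signatures to a daily probability."""
        z = intercept + aep.mul(coef, axis=1).sum(axis=1)
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical daily series and coefficients, for illustration only
    rain = pd.Series(np.random.default_rng(0).gamma(0.6, 4.0, size=400),
                     index=pd.date_range("2006-01-01", periods=400, freq="D"))
    aep = antecedent_totals(rain)
    coef = pd.Series({"aep_1d": 0.05, "aep_7d": 0.02, "aep_90d": 0.004})
    daily_p = landslide_probability(aep, coef, intercept=-6.0)
    ```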

  20. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    Energy Technology Data Exchange (ETDEWEB)

    Li, Lu [Sate Key Laboratory of Fire Science, University of Science and Technology of China, Hefei 230027 (China); Huang, Xianjia, E-mail: huangxianjia@gziit.ac.cn [Joint Laboratory of Fire Safety in Nuclear Power Plants, Institute of Industry Technology Guangzhou & Chinese Academy of Sciences, Guangzhou 511458 (China); Bi, Kun; Liu, Xiaoshuang [China Nuclear Power Design Co., Ltd., Shenzhen 518045 (China)

    2016-05-15

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model, referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays), was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacing. Experiments on vertical cable tray fires with three typical cable spacings were conducted. The histories of mass loss rate and flame length were recorded during the cable fires. From the experimental results, it is found that the spacing between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, it is shown that the enhanced model is capable of predicting the different behaviors of cable fires with different cable spacing by adjusting the flame spread speed only.

  1. Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Coppersmith, K.J.; Youngs, R.R.

    1992-01-01

    In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors have developed a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model, which describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model, which describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, we allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.

  2. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
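
    For readers unfamiliar with the first technique, the sketch below illustrates the generic tornado-diagram screening idea (it is not the UCERF3-TD code): hold every logic-tree branch at a baseline choice, swing one branch at a time through its alternatives, and rank branches by the swing they induce in the loss metric. The loss_model callable is a hypothetical stand-in for the expensive hazard-and-loss calculation.

    ```python
    # Generic tornado-diagram screening of logic-tree branches (illustrative only).
    def tornado_ranking(loss_model, baseline, alternatives):
        """baseline: dict branch -> baseline choice;
        alternatives: dict branch -> list of alternative choices;
        returns branches sorted by the swing they induce in the loss metric."""
        swings = {}
        for branch, choices in alternatives.items():
            losses = [loss_model({**baseline, branch: choice}) for choice in choices]
            swings[branch] = max(losses) - min(losses)
        # Branches with a small swing are candidates for being fixed at a single value,
        # which is how a reduced-order model with far fewer leaves can be obtained.
        return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)
    ```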

  3. Mendel's use of mathematical modelling: ratios, predictions and the appeal to tradition.

    Science.gov (United States)

    Teicher, Amir

    2014-01-01

    The seventh section of Gregor Mendel's famous 1866 paper contained a peculiar mathematical model, which predicted the expected ratios between the number of constant and hybrid types, assuming self-pollination continued throughout further generations. This model was significant for Mendel's argumentation and was perceived as inseparable from his entire theory at the time. A close examination of this model reveals that it has several perplexing aspects which have not yet been systematically scrutinized. The paper analyzes those aspects, dispels some common misconceptions regarding the interpretation of the model, and re-evaluates the role of this model for Mendel himself. In light of the resulting analysis, Mendel's position between nineteenth-century hybridist tradition and twentieth-century population genetics is reassessed, and his sophisticated use of mathematics to legitimize his innovative theory is uncovered.

  4. Averaged Solar Radiation Pressure Modeling for High Area-to-Mass Ratio Objects in Geostationary Space

    Science.gov (United States)

    Eapen, Roshan Thomas

    Space Situational Awareness is aimed at providing timely and accurate information about the space environment. This was originally done by maintaining a catalog of space object states (position and velocity). Traditionally, a cannonball model would be used to propagate the dynamics. This can be acceptable for an active satellite, since its attitude motion can be stabilized. However, for non-functional space debris the cannonball model falls short, because it is attitude-independent and the debris is prone to tumbling. Furthermore, high area-to-mass ratio objects are sensitive to very small changes in perturbations, particularly those of the non-conservative kind. This renders the cannonball model imprecise for propagating the orbital motion of such objects. With the ever-increasing population of man-made space debris, in-orbit explosions, collisions and potential impacts of near Earth objects, it has become imperative to modify the traditional approach to a more predictive, tactical and exact rendition. Hence, a more precise orbit propagation model needs to be developed, which warrants a better understanding of the perturbations in near Earth space. The attitude dependency of some perturbations renders the orbit and attitude motion coupled. In this work, a coupled orbit-attitude model is developed taking both conservative and non-conservative forces and torques into account. A high area-to-mass ratio multi-layer insulation object in geostationary space is simulated using the coupled dynamics model. However, the high-fidelity model developed is computationally expensive. This work therefore aims at developing a model that averages the short-term solar radiation pressure force, performing computationally better than the cannonball model while concurrently having fidelity comparable to the coupled orbit-attitude model.

  5. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

    In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so that the possibility of the system sustaining the normal operation load decreases with time; it depends on the failure probability, 1-R. This representation captures the memory characteristics of the second failure cause. Man-machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one as a consequence of inappropriate human habits in system operation, etc. It is suggested that this item is correlated with the reliability, R, as well as with the failure probability. Adaptation concerns continuous adjustment between the mating subsystems. When a new system is put on duty, some hidden defects are exposed and eventually disappear. Therefore, the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); thus the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used to validate the proposed model. Satisfactory results are found from the comparisons.
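
    Schematically, the four additive contributions described above can be written as follows. The abstract does not give the exact parameterization, so the coefficients c_i and the exponents m and n below are placeholders rather than the authors' formula.

    ```latex
    % Schematic additive hazard rate in terms of reliability R (placeholders, not the authors' exact form)
    h(R) = \underbrace{c_0}_{\text{random failures}}
         + \underbrace{c_1\,(1-R)^{m}}_{\text{cumulative damage}}
         + \underbrace{c_2\,R\,(1-R)}_{\text{man-machine interference}}
         + \underbrace{c_3\,R^{n}}_{\text{adaptation}}
    ```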

  6. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    Science.gov (United States)

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular model for analyzing the effects of several covariates on survival time. However, the assumption of proportional hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. The accelerated failure time (AFT) models, on the other hand, do not rely on the proportional hazards assumption and can be used as an alternative to the PH model when that assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate model among the models considered. Results of the best fitted model (log-normal AFT model) showed that covariates such as the woman's educational level, the husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
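
    A minimal sketch of this kind of comparison, using the Python lifelines package (not mentioned in the paper), is given below. The column names are assumptions; the exponential AFT model is the special case of the Weibull AFT model with its shape parameter fixed at one.

    ```python
    # Sketch only: fit a Cox PH model and two AFT models and compare them by AIC,
    # assuming a DataFrame `df` with duration column "fbi_months", event indicator
    # "event", and covariate columns such as education, media access, wealth index.
    from lifelines import CoxPHFitter, WeibullAFTFitter, LogNormalAFTFitter

    def compare_models(df):
        cox = CoxPHFitter().fit(df, duration_col="fbi_months", event_col="event")
        results = {"Cox PH (partial AIC)": cox.AIC_partial_}
        for name, fitter in [("Weibull AFT", WeibullAFTFitter()),
                             ("Log-normal AFT", LogNormalAFTFitter())]:
            fitter.fit(df, duration_col="fbi_months", event_col="event")
            results[name] = fitter.AIC_   # smaller AIC indicates a better fit
        return results
    ```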

  7. A 3-dimensional in vitro model of epithelioid granulomas induced by high aspect ratio nanomaterials

    Directory of Open Access Journals (Sweden)

    Hurt Robert H

    2011-05-01

    Background: The most common causes of granulomatous inflammation are persistent pathogens and poorly-degradable irritating materials. A characteristic pathological reaction to intratracheal instillation, pharyngeal aspiration, or inhalation of carbon nanotubes is the formation of epithelioid granulomas accompanied by interstitial fibrosis in the lungs. In the mesothelium, a similar response is induced by high aspect ratio nanomaterials, including asbestos fibers, following intraperitoneal injection. This asbestos-like behaviour of some engineered nanomaterials is a concern for their potential adverse health effects in the lungs and mesothelium. We hypothesize that high aspect ratio nanomaterials will induce epithelioid granulomas in nonadherent macrophages in 3D cultures. Results: Carbon black particles (Printex 90) and crocidolite asbestos fibers were used as well-characterized reference materials and compared with three commercial samples of multiwalled carbon nanotubes (MWCNTs). Doses were identified in 2D and 3D cultures in order to minimize acute toxicity and to reflect realistic occupational exposures in humans and in previous inhalation studies in rodents. Under serum-free conditions, exposure of nonadherent primary murine bone marrow-derived macrophages to 0.5 μg/ml (0.38 μg/cm2) of crocidolite asbestos fibers or MWCNTs, but not carbon black, induced macrophage differentiation into epithelioid cells and formation of stable aggregates with the characteristic morphology of granulomas. Formation of multinucleated giant cells was also induced by asbestos fibers or MWCNTs in this 3D in vitro model. After 7-14 days, macrophages exposed to high aspect ratio nanomaterials co-expressed proinflammatory (M1) as well as profibrotic (M2) phenotypic markers. Conclusions: Induction of epithelioid granulomas appears to correlate with the high aspect ratio and complex 3D structure of carbon nanotubes, not with their iron content or surface area. This model

  8. Challenges in understanding, modelling, and mitigating Lake Outburst Flood Hazard: experiences from Central Asia

    Science.gov (United States)

    Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.

    2010-05-01

    Lake Outburst Floods can evolve from complex process chains, like avalanches of rock or ice that produce flood waves in a lake, which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams, which can cause sudden outflow of the accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway - though problematic - has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves overtopping and eventually weakening the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remain a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also

  9. Risk Evaluation of Debris Flow Hazard Based on Asymmetric Connection Cloud Model

    Directory of Open Access Journals (Sweden)

    Xinyu Xu

    2017-01-01

    Risk assessment of debris flow is a complex problem involving various uncertainty factors. Herein, a novel asymmetric cloud model coupled with connection numbers is described to take into account the fuzziness and convertibility of classification boundaries and the interval nature of evaluation indicators for risk assessment of debris flow hazard. In the model, according to the classification standard, the interval lengths of each indicator were first specified to determine the digital characteristics of the connection cloud at different levels. Then the asymmetric connection clouds in finite intervals were simulated to analyze the certainty degree of each measured indicator with respect to each evaluation standard. Next, the integrated certainty degree for each grade was calculated with the corresponding indicator weights, and the risk grade of the debris flow was determined by the maximum integrated certainty degree. Finally, a case study and a comparison with other methods were conducted to confirm the reliability and validity of the proposed model. The results show that this model overcomes the defects of the conventional cloud model and converts the infinite interval of the indicator distributions into finite intervals, which makes the evaluation results more reasonable.

  10. The Hazard Analysis and Critical Control Points (HACCP) generic model for the production of Thai fermented pork sausage (Nham).

    Science.gov (United States)

    Paukatong, K V; Kunawasen, S

    2001-01-01

    Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. In addition to the microbial hazards from the pathogens previously found in Nham, sodium nitrite and metal were identified as the chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points. These steps are the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step. The critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm. This level is high enough to control Clostridium botulinum but does not cause a chemical hazard to the consumer. The physical hazard from metal clips could be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham could be reduced in the fermentation process. The critical limit of the pH of Nham was set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.

  11. A survey of basic reproductive ratios in vector-borne disease transmission modeling

    Science.gov (United States)

    Soewono, E.; Aldila, D.

    2015-03-01

    Vector-borne diseases are common in tropical and subtropical countries. These diseases have contributed to more than 10% of world infectious disease cases. Among the vectors responsible for transmitting the diseases are mosquitoes, ticks, fleas, flies, bugs and worms. Several of the diseases are known to contribute to the increasing threat to human health, such as malaria, dengue, filariasis, chikungunya, West Nile fever, yellow fever, encephalitis, and anthrax. It is necessary to understand the real process of infection and the factors which complicate transmission in order to come up with a good and sound mathematical model. Although it is not easy to simulate the real transmission process of the infection, we could say that almost all models have been developed from the long-known host-vector model. It constitutes the main transmission processes, i.e. birth, death, infection and recovery. From this simple model, the basic concepts of disease-free and endemic equilibria and the basic reproductive ratio can be well explained and understood. Theoretical, modeling, control and treatment aspects of disease transmission problems have then been developed for various related diseases. General constructions as well as specific forms of basic reproductive ratios for vector-borne diseases are discussed here.
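
    As one classical special case of the general construction (the notation here is ours, not the paper's), the Ross-Macdonald-type host-vector model with vector-to-host ratio m, biting rate a, per-bite transmission probabilities b (vector to host) and c (host to vector), host recovery rate r and vector mortality rate mu has the basic reproductive ratio

    ```latex
    % Classical Ross-Macdonald-type basic reproductive ratio (extrinsic incubation omitted)
    R_0 \;=\; \sqrt{\frac{m\,a^{2}\,b\,c}{r\,\mu}}
    ```

    In this standard setting the disease-free equilibrium is locally stable when R_0 < 1, and an endemic equilibrium appears when R_0 > 1.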

  12. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    Science.gov (United States)

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, compared with previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. The resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  13. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques

    Science.gov (United States)

    Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo

    2017-11-01

    The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and a correlation analysis between the conditioning factors and landslides were applied. In the third step, we used the three methods, ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, their accuracy was validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced in the study area can be applied for management of hazards and risks in landslide-prone Hanyuan County.
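
    The sketch below shows, with scikit-learn, how one of the three models (the SVM) can be trained on conditioning-factor values and scored by the area under the ROC curve, as done for the prediction rates quoted above. It is not the authors' code; the synthetic data, feature count and train/test split are assumptions for illustration.

    ```python
    # Illustrative landslide-susceptibility SVM with ROC-AUC validation (not the authors' code).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    X = rng.random((500, 12))             # 12 conditioning factors (slope, TWI, NDVI, ...)
    y = rng.integers(0, 2, 500)           # 1 = landslide cell, 0 = non-landslide cell
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X_tr, y_tr)
    susceptibility = model.predict_proba(X_te)[:, 1]   # per-cell susceptibility score
    print("prediction-rate AUC:", roc_auc_score(y_te, susceptibility))
    ```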

  14. Modelling the impacts of coastal hazards on land-use development

    Science.gov (United States)

    Ramirez, J.; Vafeidis, A. T.

    2009-04-01

    Approximately 10% of the world's population live in close proximity to the coast and are potentially susceptible to tropical or extra-tropical storm-surge events. These events will be exacerbated by projected sea-level rise (SLR) in the 21st century. Accelerated SLR is one of the more certain impacts of global warming and can have major effects on humans and ecosystems. Particularly vulnerable are densely populated coastal urban centres containing globally important commercial resources, with assets worth billions of USD. Moreover, coastal populations are reported to be growing faster than the global mean, leading to increased human exposure to coastal hazards. Consequently, the potential impacts of coastal hazards can be significant in the future and will depend on various factors, but actual impacts can be considerably reduced by appropriate human decisions on coastal land-use management. At the regional scale, it is therefore necessary to identify which coastal areas are vulnerable to these events and to explore potential long-term responses reflected in land usage. Land-use change modelling is a technique which has been extensively used in recent years for studying the processes and mechanisms that govern the evolution of land use and which can potentially provide valuable information related to the future coastal development of regions that are vulnerable to physical forcings. Although studies have utilized land-use classification maps to determine the impact of sea-level rise, few use land-use projections to make these assessments, and none have considered the adaptive behaviour of coastal dwellers exposed to hazards. In this study a land-use change model, which is based on artificial neural networks (ANN), was employed for predicting coastal urban and agricultural development. The model uses as inputs a series of spatial layers, which include information on population distribution, transportation networks, existing urban centres, and

  15. Doubly stochastic models for volcanic hazard assessment at Campi Flegrei caldera

    CERN Document Server

    Bevilacqua, Andrea

    2016-01-01

    This study provides innovative mathematical models for assessing the eruption probability and associated volcanic hazards, and applies them to the Campi Flegrei caldera in Italy. Throughout the book, significant attention is devoted to quantifying the sources of uncertainty affecting the forecast estimates. The Campi Flegrei caldera is certainly one of the world’s highest-risk volcanoes, with more than 70 eruptions over the last 15,000 years, prevalently explosive ones of varying magnitude, intensity and vent location. In the second half of the twentieth century the volcano apparently once again entered a phase of unrest that continues to the present. Hundreds of thousands of people live inside the caldera and over a million more in the nearby city of Naples, making a future eruption of Campi Flegrei an event with potentially catastrophic consequences at the national and European levels.

  16. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, while the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including a static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves relatively small spurious velocities compared with other LB models, and the obtained numerical results also show good agreement with the analytical solutions or available reference results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
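
    For reference, a commonly used form of the conservative Allen-Cahn equation solved by the first LB equation is shown below, with order parameter phi, velocity u, mobility M and interface width W; the paper's exact formulation may differ in detail.

    ```latex
    \frac{\partial \phi}{\partial t} + \nabla\cdot(\phi\,\mathbf{u})
      = \nabla\cdot\!\left[\, M\left(\nabla\phi - \frac{4\,\phi(1-\phi)}{W}\,\hat{\mathbf{n}}\right)\right],
    \qquad
    \hat{\mathbf{n}} = \frac{\nabla\phi}{\lvert\nabla\phi\rvert}
    ```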

  17. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method.

    Science.gov (United States)

    Valentin, Jan B; Andreetta, Christian; Boomsma, Wouter; Bottaro, Sandro; Ferkinghoff-Borg, Jesper; Frellsen, Jes; Mardia, Kanti V; Tian, Pengfei; Hamelryck, Thomas

    2014-02-01

    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins. The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications. Copyright © 2013 Wiley Periodicals, Inc.
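
    In schematic form (the notation is ours), the reference ratio construction combines the local model q(x) over conformations x with the nonlocal model p(y) over a coarse-grained descriptor y(x), such as the energy vector, by dividing out the descriptor distribution q(y) implied by the local model:

    ```latex
    p(x) \;\propto\; q(x)\,\frac{p\bigl(y(x)\bigr)}{q\bigl(y(x)\bigr)}
    ```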

  18. Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation

    Science.gov (United States)

    Borga, M.; Creutin, J. D.

    Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two

  19. Predictive models of objective oropharyngeal OSA surgery outcomes: Success rate and AHI reduction ratio.

    Science.gov (United States)

    Choi, Ji Ho; Lee, Jae Yong; Cha, Jaehyung; Kim, Kangwoo; Hong, Seung-No; Lee, Seung Hoon

    2017-01-01

    The aim of this study was to develop a predictive model of objective oropharyngeal obstructive sleep apnea (OSA) surgery outcomes including success rate and apnea-hypopnea index (AHI) reduction ratio in adult OSA patients. Retrospective outcome research. All subjects with OSA who underwent oropharyngeal and/or nasal surgery and were followed for at least 3 months were enrolled in this study. Demographic, anatomical [tonsil size (TS) and palate-tongue position (PTP) grade (Gr)], and polysomnographic parameters were analyzed. The AHI reduction ratio (%) was defined as [(postoperative AHI-preoperative AHI) x 100 / postoperative AHI], and surgical success was defined as a ≥ 50% reduction in preoperative AHI with a postoperative AHI < 20. The best predictive equation by Forward Selection likelihood ratio (LR) logistic regression analysis was: [Formula: see text] The best predictive equation according to stepwise multiple linear regression analysis was: [Formula: see text] (TS/PTP Gr = 1 if TS/PTP Gr 3 or 4, TS/PTP Gr = 0 if TS/PTP Gr 1 or 2). The predictive models for oropharyngeal surgery described in this study may be useful for planning surgical treatments and improving objective outcomes in adult OSA patients.

  20. A numerical test method of California bearing ratio on graded crushed rocks using particle flow modeling

    Directory of Open Access Journals (Sweden)

    Yingjun Jiang

    2015-04-01

    In order to better understand the mechanical properties of graded crushed rocks (GCRs) and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR) test on GCRs. The effects of different testing conditions and of the micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied. The reliability of the numerical technique is verified. The numerical results suggest that the influences of the loading rate and Poisson's ratio on the CBR numerical test results are not significant. As such, a loading rate of 1.0–3.0 mm/min, a piston diameter of 5 cm, a specimen height of 15 cm and a specimen diameter of 15 cm are adopted for the CBR numerical test. The numerical results reveal that the CBR values increase with the contact friction coefficient and the shear modulus of the rocks, while the influence of Poisson's ratio on the CBR values is insignificant. The close agreement between the CBR numerical results and the experimental results suggests that numerical simulation of CBR values is a promising way to help assess the mechanical properties of GCRs and to optimize the grading design. In addition, the numerical study can provide useful insights into the mesoscopic mechanism.
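
    For context, the California bearing ratio reported by such tests is the ratio of the measured piston pressure p at a given penetration to a standard reference pressure p_s (commonly about 6.9 MPa at 2.5 mm penetration and 10.3 MPa at 5.0 mm), expressed as a percentage:

    ```latex
    \mathrm{CBR} \;=\; \frac{p}{p_{s}} \times 100\,\%
    ```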

  1. Predictive models of objective oropharyngeal OSA surgery outcomes: Success rate and AHI reduction ratio.

    Directory of Open Access Journals (Sweden)

    Ji Ho Choi

    The aim of this study was to develop a predictive model of objective oropharyngeal obstructive sleep apnea (OSA) surgery outcomes including success rate and apnea-hypopnea index (AHI) reduction ratio in adult OSA patients. Retrospective outcome research. All subjects with OSA who underwent oropharyngeal and/or nasal surgery and were followed for at least 3 months were enrolled in this study. Demographic, anatomical [tonsil size (TS) and palate-tongue position (PTP) grade (Gr)], and polysomnographic parameters were analyzed. The AHI reduction ratio (%) was defined as [(postoperative AHI - preoperative AHI) x 100 / postoperative AHI], and surgical success was defined as a ≥ 50% reduction in preoperative AHI with a postoperative AHI < 20. A total of 156 consecutive OSAS adult patients (mean age ± SD = 38.9 ± 9.6, M / F = 149 / 7) were included in this study. The best predictive equation by Forward Selection likelihood ratio (LR) logistic regression analysis was: [Formula: see text] The best predictive equation according to stepwise multiple linear regression analysis was: [Formula: see text] (TS/PTP Gr = 1 if TS/PTP Gr 3 or 4, TS/PTP Gr = 0 if TS/PTP Gr 1 or 2). The predictive models for oropharyngeal surgery described in this study may be useful for planning surgical treatments and improving objective outcomes in adult OSA patients.

  2. IMPLICATIONS FOR ASYMMETRY, NONPROPORTIONALITY, AND HETEROGENEITY IN BRAND SWITCHING FROM PIECE-WISE EXPONENTIAL MIXTURE HAZARD MODELS

    NARCIS (Netherlands)

    WEDEL, M; KAMAKURA, WA; DESARBO, WS; TERHOFSTEDE, F

    1995-01-01

    The authors develop a class of mixtures of piece-wise exponential hazard models for the analysis of brand switching behavior. The models enable the effects of marketing variables to change nonproportionally over time and can, simultaneously, be used to identify segments among which switching and
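
    Schematically (the notation is ours, not the authors'), a piece-wise exponential hazard lets both the baseline rate and the covariate effects differ across time intervals, which is what allows the marketing-variable effects to change nonproportionally over time; making the parameters segment-specific in a finite mixture adds the heterogeneity component:

    ```latex
    h(t \mid \mathbf{x},\, \text{segment } s)
      \;=\; \lambda_{0js}\,\exp\!\bigl(\mathbf{x}'\boldsymbol{\beta}_{js}\bigr),
    \qquad t \in [\tau_{j-1},\, \tau_{j})
    ```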

  3. Multiple imputation of missing covariates for the Cox proportional hazards cure model.

    Science.gov (United States)

    Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G

    2016-11-20

    We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects 'cured', and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification. We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    Science.gov (United States)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed at natural sites. This approach makes use of meta-models; it considers a widely used rockfall barrier type and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed, concerning the barrier's capability either of stopping the block or of reducing its kinetic energy. The influence of the parameter ranges on the meta-model accuracy has also been investigated. The results of the study reveal that the meta-models reproduce with good accuracy the response of the barrier to any impact conditions, providing a powerful tool to support the design of these structures. Furthermore, because the approach accommodates the effects of the impact conditions on the prediction of the block-barrier interaction, it can be successfully used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and to optimise rockfall mitigation strategies.

  5. CON4EI: Evaluation of QSAR models for hazard identification and labelling of eye irritating chemicals.

    Science.gov (United States)

    Geerts, L; Adriaens, E; Alépée, N; Guest, R; Willoughby, J A; Kandarova, H; Drzewiecka, A; Fochtman, P; Verstraelen, S; Van Rompay, A R

    2017-09-21

    Assessment of ocular irritation is a regulatory requirement in the safety evaluation of industrial and consumer products. Although a number of in vitro ocular irritation assays exist, none are capable of fully categorizing chemicals as stand-alone assays. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was developed to assess the reliability of eight in vitro test methods and computational models as well as to establish an optimal tiered-testing strategy. For three computational models (Toxtree, and Case Ultra EYE_DRAIZE and EYE_IRR), performance parameters were calculated. Coverage ranged from 15 to 58%. Coverage was 2 to 3.4 times higher for liquids than for solids. The lowest number of false positives (5%) was reached with EYE_IRR; this model, however, also gave a high number of false negatives (46%). The lowest number of false negatives (25%) was seen with Toxtree; for liquids Toxtree predicted the lowest number of false negatives (11%), while for solids EYE_DRAIZE did (17%). It can be concluded that the training sets should be enlarged with high-quality data. The tested models are not yet sufficiently powerful for stand-alone evaluations, but they can certainly become of value in an integrated weight-of-evidence approach in hazard assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Revisiting the concept of Redfield ratios applied to plankton stoichiometry - Addressing model uncertainties with respect to the choice of C:N:P ratios for phytoplankton

    Science.gov (United States)

    Kreus, Markus; Paetsch, Johannes; Grosse, Fabian; Lenhart, Hermann; Peck, Myron; Pohlmann, Thomas

    2017-04-01

    Ongoing Ocean Acidification (OA) and climate-change-related trends affect the physical (temperature), chemical (CO2 buffer capacity) and biological (stoichiometric) properties of the marine environment. These threats affect the global ocean, but they appear particularly pronounced in marginal and shelf seas. Marine biogeochemical models are often used to investigate the impacts of climate change and of changes in OA on the marine system as well as its exchange with the atmosphere. Different studies have shown that both the structural composition of the models and the elemental ratios of particulate organic matter in the surface ocean affect the key processes controlling the ocean's efficiency in storing excess atmospheric carbon. Recent studies focus on the variability of the elemental ratios of phytoplankton and have found that the high plasticity of C:N:P ratios enables the storage of large amounts of carbon by incorporation into carbohydrates and lipids. Our analysis focuses on the North Sea, a temperate European shelf sea, for the period 2000-2014. We performed an ensemble of model runs differing only in phytoplankton stoichiometry, representing combinations of C:P = [132.5, 106, 79.5] and N:P = [20, 16, 12] (i.e., the Redfield ratio +/- 25%). We examine systematically the variations in annual averages of net primary production (NPP), net ecosystem production in the upper 30 m (NEP30), export production below 30 m depth (EXP30), and the air-sea flux of CO2 (ASF). Ensemble average fluxes (and standard deviations) resulted in NPP = 15.4 (2.8) mol C m-2 a-1, NEP30 = 5.4 (1.1) mol C m-2 a-1, EXP30 = 8.1 (1.1) mol C m-2 a-1 and ASF = 1.1 (0.5) mol C m-2 a-1. All key parameters exhibit only minor variations along the axis of constant C:N, but correlate positively with increasing C:P and decreasing N:P ratios. Concerning regional differences, the lowest variations in local fluxes due to different stoichiometric ratios are found in the shallow southern and coastal North Sea. Highest

  7. Computational Package for Copolymerization Reactivity Ratio Estimation: Improved Access to the Error-in-Variables-Model

    Directory of Open Access Journals (Sweden)

    Alison J. Scott

    2018-01-01

    The error-in-variables-model (EVM) is the most statistically correct non-linear parameter estimation technique for reactivity ratio estimation. However, many polymer researchers are unaware of the advantages of EVM and therefore still choose to use rather erroneous or approximate methods. The procedure is straightforward but it is often avoided because it is seen as mathematically and computationally intensive. Therefore, the goal of this work is to make EVM more accessible to all researchers through a series of focused case studies. All analyses employ a MATLAB-based computational package for copolymerization reactivity ratio estimation. The basis of the package is previous work in our group over many years. This version is an improvement, as it ensures wider compatibility and enhanced flexibility with respect to copolymerization parameter estimation scenarios that can be considered.
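
    For context, the reactivity ratios r1 and r2 enter the instantaneous copolymer composition (Mayo-Lewis) equation relating the monomer feed mole fractions f1, f2 to the instantaneous copolymer composition F1; EVM estimates r1 and r2 from such composition data while treating all measured variables as subject to error:

    ```latex
    F_{1} \;=\; \frac{r_{1} f_{1}^{2} + f_{1} f_{2}}{r_{1} f_{1}^{2} + 2 f_{1} f_{2} + r_{2} f_{2}^{2}}
    ```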

  8. Aspect Ratio of Receiver Node Geometry based Indoor WLAN Propagation Model

    Science.gov (United States)

    Naik, Udaykumar; Bapat, Vishram N.

    2017-08-01

    This paper presents the validation of an indoor wireless local area network (WLAN) propagation model for varying rectangular receiver node geometry. The rectangular client node configuration is a standard node arrangement in the computer laboratories of academic institutes and research organizations. The model assists in placing network nodes for better signal coverage. The proposed model is backed by wide-ranging real-time received signal strength measurements at 2.4 GHz. The shadow fading component of signal propagation under a realistic indoor environment is modelled with a dependency on the varying aspect ratio of the client node geometry. The new model is useful in predicting indoor path loss for IEEE 802.11b/g WLAN. It provides better performance in comparison to the well-known International Telecommunication Union and free-space propagation models. It is shown that the proposed model is simple and can be a useful tool for indoor WLAN node deployment planning and a quick method for the best utilisation of office space.
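
    The abstract does not give the model in closed form. As background, the widely used log-distance formulation that indoor models of this kind typically extend is shown below; in the proposed model the shadow-fading term X_sigma is made dependent on the aspect ratio of the receiver node geometry.

    ```latex
    PL(d) \;=\; PL(d_{0}) + 10\,n\,\log_{10}\!\left(\frac{d}{d_{0}}\right) + X_{\sigma}
    ```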

  9. Allometric Scaling and Cell Ratios in Multi-Organ in vitro Models of Human Metabolism

    Science.gov (United States)

    Ucciferri, Nadia; Sbrana, Tommaso; Ahluwalia, Arti

    2014-01-01

    Intelligent in vitro models able to recapitulate the physiological interactions between tissues in the body have enormous potential as they enable detailed studies on specific two-way or higher order tissue communication. These models are the first step toward building an integrated picture of systemic metabolism and signaling in physiological or pathological conditions. However, the rational design of in vitro models of cell–cell or cell–tissue interaction is difficult as quite often cell culture experiments are driven by the device used, rather than by design considerations. Indeed, very little research has been carried out on in vitro models of metabolism connecting different cell or tissue types in a physiologically and metabolically relevant manner. Here, we analyze the physiological relationship between cells, cell metabolism, and exchange in the human body using allometric rules, downscaling them to an organ-on-a-plate device. In particular, in order to establish appropriate cell ratios in the system in a rational manner, two different allometric scaling models (cell number scaling model and metabolic and surface scaling model) are proposed and applied to a two compartment model of hepatic-vascular metabolic cross-talk. The theoretical scaling studies illustrate that the design and hence relevance of multi-organ models is principally determined by experimental constraints. Two experimentally feasible model configurations are then implemented in a multi-compartment organ-on-a-plate device. An analysis of the metabolic response of the two configurations demonstrates that their glucose and lipid balance is quite different, with only one of the two models recapitulating physiological-like homeostasis. In conclusion, not only do cross-talk and physical stimuli play an important role in in vitro models, but the numeric relationship between cells is also crucial to recreate in vitro interactions, which can be extrapolated to the in vivo reality. PMID:25566537
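
    A minimal sketch of the downscaling idea is given below, assuming a Kleiber-type exponent of 0.75 for metabolic demand; the organ masses, the cell budget and the allocation rule are placeholders for illustration and are not the authors' exact scaling models.

    ```python
    # Illustrative allometric downscaling (not the authors' model): allocate a fixed in vitro
    # cell budget across organ compartments in proportion to mass**0.75 metabolic demand.
    def scaled_cell_numbers(organ_masses_g, total_cells_in_vitro, exponent=0.75):
        demand = {organ: mass ** exponent for organ, mass in organ_masses_g.items()}
        total_demand = sum(demand.values())
        return {organ: int(total_cells_in_vitro * d / total_demand)
                for organ, d in demand.items()}

    # Hypothetical two-compartment example (hepatic and vascular), purely for illustration
    print(scaled_cell_numbers({"liver": 1800.0, "endothelium": 700.0},
                              total_cells_in_vitro=2_000_000))
    ```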

  10. Allometric Scaling and Cell Ratios in Multi-Organ in vitro Models of Human Metabolism

    International Nuclear Information System (INIS)

    Ucciferri, Nadia; Sbrana, Tommaso; Ahluwalia, Arti

    2014-01-01

    Intelligent in vitro models able to recapitulate the physiological interactions between tissues in the body have enormous potential as they enable detailed studies on specific two-way or higher order tissue communication. These models are the first step toward building an integrated picture of systemic metabolism and signaling in physiological or pathological conditions. However, the rational design of in vitro models of cell–cell or cell–tissue interaction is difficult as quite often cell culture experiments are driven by the device used, rather than by design considerations. Indeed, very little research has been carried out on in vitro models of metabolism connecting different cell or tissue types in a physiologically and metabolically relevant manner. Here, we analyze the physiological relationship between cells, cell metabolism, and exchange in the human body using allometric rules, downscaling them to an organ-on-a-plate device. In particular, in order to establish appropriate cell ratios in the system in a rational manner, two different allometric scaling models (cell number scaling model and metabolic and surface scaling model) are proposed and applied to a two compartment model of hepatic-vascular metabolic cross-talk. The theoretical scaling studies illustrate that the design and hence relevance of multi-organ models is principally determined by experimental constraints. Two experimentally feasible model configurations are then implemented in a multi-compartment organ-on-a-plate device. An analysis of the metabolic response of the two configurations demonstrates that their glucose and lipid balance is quite different, with only one of the two models recapitulating physiological-like homeostasis. In conclusion, not only do cross-talk and physical stimuli play an important role in in vitro models, but the numeric relationship between cells is also crucial to recreate in vitro interactions, which can be extrapolated to the in vivo reality.

  11. Development of a Probabilistic Tornado Wind Hazard Model for the Continental United States Volume I: Main Report

    International Nuclear Information System (INIS)

    Boissonnade, A; Hossain, Q; Kimball, J

    2000-01-01

    Since the mid-1980s, assessment of the wind and tornado risks at the Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model, developed by McDonald, for severe winds at sub-tornado wind speeds and a separate model, developed by Fujita, for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels, at a given location, based on the methodology in UCRL-53526, are different than those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.

  12. Scaling model for high-aspect-ratio microballoon direct-drive implosions at short laser wavelengths

    International Nuclear Information System (INIS)

    Schirmann, D.; Juraszek, D.; Lane, S.M.; Campbell, E.M.

    1992-01-01

    A scaling model for hot spherical ablative implosions in direct-drive mode is presented. The model results have been compared with experiments from LLE, ILE, and LLNL. Reduction of the neutron yield due to illumination nonuniformities is taken into account by the assumption that the neutron emission is cut off when the gas shock wave reflected off the center meets the incoming pusher, i.e., at a time when the probability of shell breakup is greatly enhanced. The main advantage of this semiempirical scaling model is that it elucidates the principal features of these simple implosions and permits one to estimate very quickly the performance of a high-aspect-ratio direct-drive target illuminated by short-wavelength laser light. (Author)

  13. [Application of occupational hazard risk index model in occupational health risk assessment in a decorative coating manufacturing enterprise].

    Science.gov (United States)

    He, P L; Zhao, C X; Dong, Q Y; Hao, S B; Xu, P; Zhang, J; Li, J G

    2018-01-20

    Objective: To evaluate the occupational health risk of decorative coating manufacturing enterprises and to explore the applicability of the occupational hazard risk index model in health risk assessment, so as to provide a basis for the health management of enterprises. Methods: A decorative coating manufacturing enterprise in Hebei Province was chosen as the research object. According to the types of occupational hazards and the contact patterns, the occupational hazard risk index model was used to evaluate the occupational health risks of the hazards present at the key positions of the enterprise, and the assessment was checked against workplace measurement results and occupational health examinations. Results: The positions of oily painters, water-borne painters, filling workers and packers exposed to noise were classified as moderate harm. Positions of color workers exposed to chromic acid salts and of oily painters exposed to butyl acetate were classified as mild harm. Other positions were classified as harmless. The rate of abnormal findings among noise-exposed workers in the health examinations was 6.25%, and no abnormalities were found for the other risk factors. Conclusion: The occupational hazard risk index model can be used in the occupational health risk assessment of decorative coating manufacturing enterprises, and noise was the key hazard among the occupational hazards in this enterprise.

  14. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

    Full Text Available Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome – underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  15. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    Science.gov (United States)

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  16. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  17. Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods

    Science.gov (United States)

    Hurst, T.; Smith, W. D.; Bibby, H. M.

    2003-12-01

    We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.

  18. Conceptual model of volcanism and volcanic hazards of the region of Ararat valley, Armenia

    Science.gov (United States)

    Meliksetian, Khachatur; Connor, Charles; Savov, Ivan; Connor, Laura; Navasardyan, Gevorg; Manucharyan, Davit; Ghukasyan, Yura; Gevorgyan, Hripsime

    2015-04-01

    Armenia and the adjacent volcanically active regions in Iran, Turkey and Georgia are located in the collision zone between the Arabian and Eurasian lithospheric plates. The majority of studies of regional collision related volcanism use the model proposed by Keskin (2003), where volcanism is driven by Neo-Tethyan slab break-off. In Armenia, >500 Quaternary-Holocene volcanoes from the Gegham, Vardenis and Syunik volcanic fields are hosted within pull-apart structures formed by active faults and their segments (Karakhanyan et al., 2002), while the tectonic position of the large-volume basalt-dacite Aragats volcano and its peripheral volcanic plateaus is different, and its position away from major fault lines necessitates a more complex volcano-tectonic setup. Our detailed volcanological, petrological and geochemical studies provide insight into the nature of such volcanic activity in the region of Ararat Valley. Most magmas, such as those erupted in Armenia, are volatile-poor and erupt fairly hot. Here we report newly discovered tephra sequences in Ararat valley, that were erupted from the historically active Ararat stratovolcano and provide evidence for explosive eruption of young, mid-K2O calc-alkaline and volatile-rich (>4.6 wt% H2O; amph-bearing) magmas. Such young eruptions, in addition to the ignimbrite and lava flow hazards from Gegham and Aragats, present a threat to the >1.4 million people (~ ½ of the population of Armenia). We will report numerical simulations of potential volcanic hazards for the region of Ararat valley near Yerevan, including tephra fallout, lava flows and the opening of new vents. Connor et al. (2012) J. Applied Volcanology 1:3, 1-19; Karakhanian et al. (2002), JVGR, 113, 319-344; Keskin, M. (2003) Geophys. Res. Lett. 30, 24, 8046.

  19. Local models for rainstorm-induced hazard analysis on Mediterranean river-torrential geomorphological systems

    Directory of Open Access Journals (Sweden)

    N. Diodato

    2004-01-01

    Full Text Available Damaging hydrogeomorphological events are defined as one or more simultaneous phenomena (e.g. accelerated erosions, landslides, flash floods and river floods), occurring in a spatially and temporally random way and triggered by rainfall with different intensity and extent. The storm rainfall values are highly dependent on weather conditions and relief. However, the impact of rainstorms in Mediterranean mountain environments depends mainly on climatic fluctuations in the short and long term, especially in rainfall quantity. An algorithm for the characterisation of this impact, called Rainfall Hazard Index (RHI), is developed with a less expensive methodology. In RHI modelling, we assume that the river-torrential system has adapted to the natural hydrological regime, and a sudden fluctuation in this regime, especially one exceeding thresholds for an acceptable range of flexibility, may have disastrous consequences for the mountain environment. RHI integrates two rainfall variables based upon storm depth current and historical data, both of a fixed duration, and one dimensionless parameter representative of the degree of ecosystem flexibility. The approach was applied to a test site in the Benevento river-torrential landscape, Campania (Southern Italy). So, a database including data from 27 events which occurred during a 77-year period (1926-2002) was compared with Benevento-station RHI(24h) for a qualitative validation. Trends in RHIx for annual maximum storms of duration 1, 3 and 24h were also examined. Little change is observed at the 3- and 24-h duration of a storm, but a significant increase results in the hazard of short and intense storms (RHIx(1h)), in agreement with a reduction in return period for extreme rainfall events.

  20. Particle ratios from AGS to RHIC in an interacting hadronic model

    International Nuclear Information System (INIS)

    Zschiesche, D; Zeeb, G; Paech, K; Schramm, S; Stoecker, H

    2004-01-01

    The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) σ-ω approach. The commonly adopted non-interacting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, but without taking into account any interactions. In contrast, the chiral SU(3) model predicts temperature and density dependent effective hadron masses and effective chemical potentials in the medium and a transition to a chirally restored phase at high temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase transition behaviour, are investigated. We show that if a chiral phase transition occurred in those collisions, 'freezing' of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those obtained in simple non-interacting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free gas approximation. The in-medium masses turn out to differ by up to 150 MeV from their vacuum values.

  1. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of prevalence ratio (PR) using the Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of their infant's risk signs of diarrhea was associated significantly with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point estimation and interval estimation of the PR of medical care-seeking prevalence to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for the covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point estimation and interval estimation of PRs from the three Bayesian log-binomial regression models differed slightly from those of PRs from the conventional log-binomial regression model, but they had a good consistency in estimating PR. Therefore, the Bayesian log-binomial regression model can effectively estimate PR with less misconvergence and has more advantages in application compared with the conventional log-binomial regression model.
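
    As an illustration of the kind of model discussed above, a minimal frequentist log-binomial sketch (maximum likelihood with a log link, not the Bayesian OpenBUGS implementation used by the authors) can be written with scipy; the simulated data, the assumed true PR of about 1.13 and all variable names are purely illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Illustrative data: exposure = caregiver recognises risk signs (0/1),
      # outcome = medical care-seeking (0/1); true PR set to roughly 1.13.
      n = 2000
      exposure = rng.integers(0, 2, n)
      p_true = np.clip(0.60 * 1.13 ** exposure, 0, 1)
      outcome = rng.binomial(1, p_true)

      X = np.column_stack([np.ones(n), exposure])   # intercept + exposure

      def neg_loglik(beta):
          # log link: log P(Y=1) = X beta; crude clipping keeps P inside (0, 1)
          pr = np.clip(np.exp(X @ beta), 1e-12, 1 - 1e-12)
          return -np.sum(outcome * np.log(pr) + (1 - outcome) * np.log(1 - pr))

      fit = minimize(neg_loglik, x0=np.array([np.log(0.5), 0.0]), method="Nelder-Mead")
      print("estimated prevalence ratio:", np.exp(fit.x[1]))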

  2. Kernel-Based Visual Hazard Comparison (kbVHC): a Simulation-Free Diagnostic for Parametric Repeated Time-to-Event Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Välitalo, Pyry A J; Knibbe, Catherijne A J; Krekels, Elke H J

    2017-11-27

    Repeated time-to-event (RTTE) models are the preferred method to characterize the repeated occurrence of clinical events. Commonly used diagnostics for parametric RTTE models require representative simulations, which may be difficult to generate in situations with dose titration or informative dropout. Here, we present a novel simulation-free diagnostic tool for parametric RTTE models; the kernel-based visual hazard comparison (kbVHC). The kbVHC aims to evaluate whether the mean predicted hazard rate of a parametric RTTE model is an adequate approximation of the true hazard rate. Because the true hazard rate cannot be directly observed, the predicted hazard is compared to a non-parametric kernel estimator of the hazard rate. With the degree of smoothing of the kernel estimator being determined by its bandwidth, the local kernel bandwidth is set to the lowest value that results in a bootstrap coefficient of variation (CV) of the hazard rate that is equal to or lower than a user-defined target value (CVtarget). The kbVHC was evaluated in simulated scenarios with different numbers of subjects, hazard rates, CVtarget values, and hazard models (Weibull, Gompertz, and circadian-varying hazard). The kbVHC was able to distinguish between Weibull and Gompertz hazard models, even when the hazard rate was relatively low (< 2 events per subject). Additionally, it was more sensitive than the Kaplan-Meier VPC to detect circadian variation of the hazard rate. An additional useful feature of the kernel estimator is that it can be generated prior to model development to explore the shape of the hazard rate function.
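
    A rough sketch of the non-parametric ingredient of the kbVHC, i.e. kernel smoothing of a hazard rate estimated from event times, is given below; the Gaussian kernel, the fixed bandwidth and the simulated Weibull data are assumptions, and the bootstrap-based local bandwidth selection of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated event times from a Weibull hazard (shape 1.5), no censoring here.
      times = rng.weibull(1.5, size=500)

      def kernel_hazard(event_times, grid, bandwidth):
          """Smooth the Nelson-Aalen increments with a Gaussian kernel."""
          t_sorted = np.sort(event_times)
          n = len(t_sorted)
          at_risk = n - np.arange(n)          # subjects still at risk at each event
          increments = 1.0 / at_risk          # Nelson-Aalen jump sizes
          # hazard(t) ~ sum_i K((t - t_i)/h)/h * dLambda_i
          diffs = (grid[:, None] - t_sorted[None, :]) / bandwidth
          weights = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2 * np.pi))
          return weights @ increments

      grid = np.linspace(0.05, 2.0, 50)
      print(kernel_hazard(times, grid, bandwidth=0.2)[:5])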

  3. The likelihood ratio test for cointegration ranks in the I(2) model

    DEFF Research Database (Denmark)

    Nielsen, Heino Bohn; Rahbek, Anders Christian

    2007-01-01

    This paper presents the likelihood ratio (LR) test for the number of cointegrating relations in the I(2) vector autoregressive model. It is shown that the asymptotic distribution of the LR test for the cointegration ranks is identical to the asymptotic distribution of the much applied test statistic based on the two-step estimation procedure in Johansen (1995, Econometric Theory 11, 25-59), Paruolo (1996, Journal of Econometrics 72, 313-356), and Rahbek, Kongsted, and Jørgensen (1999, Journal of Econometrics 90, 265-289). By construction the LR test statistic is smaller than the non-LR test...

  4. Comparison of Statistical Data Models for Identifying Differentially Expressed Genes Using a Generalized Likelihood Ratio Test

    Directory of Open Access Journals (Sweden)

    Kok-Yong Seng

    2008-01-01

    Full Text Available Currently, statistical techniques for analysis of microarray-generated data sets have deficiencies due to limited understanding of errors inherent in the data. A generalized likelihood ratio (GLR) test based on an error model has been recently proposed to identify differentially expressed genes from microarray experiments. However, the use of different error structures under the GLR test has not been evaluated, nor has this method been compared to commonly used statistical tests such as the parametric t-test. The concomitant effects of varying data signal-to-noise ratio and replication number on the performance of statistical tests also remain largely unexplored. In this study, we compared the effects of different underlying statistical error structures on the GLR test's power in identifying differentially expressed genes in microarray data. We evaluated such variants of the GLR test as well as the one sample t-test based on simulated data by means of receiver operating characteristic (ROC) curves. Further, we used bootstrapping of ROC curves to assess statistical significance of differences between the areas under the curves. Our results showed that (i) the GLR tests outperformed the t-test for detecting differential gene expression, (ii) the identity of the underlying error structure was important in determining the GLR tests' performance, and (iii) signal-to-noise ratio was a more important contributor than sample replication in identifying statistically significant differential gene expression.

  5. Modelling, Simulations, and Optimisation of Electric Vehicles for Analysis of Transmission Ratio Selection

    Directory of Open Access Journals (Sweden)

    Paul D. Walker

    2013-01-01

    Full Text Available Pure electric vehicles (PEVs) provide a unique problem in powertrain design through the meeting of performance specifications whilst maximising driving range. The consideration of single speed and multispeed transmissions for electric vehicles provides two strategies for achieving desired range and performance specifications. Through the implementation of system level vehicle models, design analysis, and optimisation, this paper analyses the application of both single speed and two-speed transmission applications to electric vehicles. Initially, transmission ratios are designed based on grade and top speed requirements, and impact on vehicle traction curve is evaluated. Then performance studies are conducted for different transmission ratios using both single speed and two-speed powertrain configurations to provide a comparative assessment of the vehicles. Finally, multivariable optimisation in the form of genetic algorithms is employed to determine an optimal gear ratio selection for single speed and two-speed PEVs. Results demonstrate that the two-speed transmission is capable of achieving better results for performance requirements over a single speed transmission, including vehicle acceleration and grade climbing. However, the lower powertrain efficiency reduces the simulated range results.
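
    To make the ratio-sizing logic concrete, a toy calculation of the feasible single-speed ratio window implied by a top-speed constraint and a grade-climbing constraint might look as follows; every vehicle parameter below is an invented placeholder, not a value from the paper.

      import numpy as np

      # Illustrative EV parameters (assumptions, not the paper's values)
      motor_max_speed_rpm = 10000.0
      motor_max_torque_nm = 250.0
      wheel_radius_m = 0.3
      mass_kg = 1500.0
      top_speed_target_ms = 150 / 3.6      # 150 km/h
      grade = 0.25                          # 25 % grade requirement
      c_rr = 0.01                           # rolling resistance coefficient
      g = 9.81

      # Upper bound: at the target top speed the motor must not over-rev.
      omega_max = motor_max_speed_rpm * 2 * np.pi / 60
      ratio_max = omega_max * wheel_radius_m / top_speed_target_ms

      # Lower bound: enough wheel torque to climb the specified grade.
      theta = np.arctan(grade)
      force_needed = mass_kg * g * (np.sin(theta) + c_rr * np.cos(theta))
      ratio_min = force_needed * wheel_radius_m / motor_max_torque_nm

      print(f"feasible single-speed ratio range: {ratio_min:.2f} .. {ratio_max:.2f}")

    A two-speed design, in effect, relaxes this window by letting one ratio satisfy the grade constraint and the other the top-speed constraint.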

  6. Integrating GIS with AHP and Fuzzy Logic to generate hand, foot and mouth disease hazard zonation (HFMD-HZ) model in Thailand

    Science.gov (United States)

    Samphutthanon, R.; Tripathi, N. K.; Ninsawat, S.; Duboz, R.

    2014-12-01

    The main objective of this research was the development of an HFMD hazard zonation (HFMD-HZ) model by applying AHP and Fuzzy Logic AHP (FAHP) methodologies for weighting each spatial factor, such as disease incidence and socio-economic and physical factors. The outputs of AHP and FAHP were input into a Geographic Information Systems (GIS) process for spatial analysis. 14 criteria were selected for analysis as important factors: disease incidence over 10 years from 2003 to 2012, population density, road density, land use and physical features. The results showed a consistency ratio (CR) value for these main criteria of 0.075427 for AHP; the CR for the FAHP results was 0.092436. As both remained below the threshold of 0.1, the CR values were acceptable. After linking to actual geospatial data (disease incidence in 2013) through spatial analysis in GIS for validation, the results of the FAHP approach were found to match more accurately than those of the AHP approach. The zones with the highest hazard of HFMD outbreaks were located in two main areas: central Muang Chiang Mai district and its suburbs, and Muang Chiang Rai district and its vicinity. The resulting hazard maps may be useful for organizing HFMD protection plans.
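
    The consistency ratio (CR) check reported above is a standard AHP computation and can be reproduced in a few lines; the 4x4 pairwise comparison matrix below is an invented example, not the study's 14-criteria matrix, and the random-index table is Saaty's usual one.

      import numpy as np

      # Example 4x4 reciprocal pairwise comparison matrix (illustrative only)
      A = np.array([
          [1,   3,   5,   7],
          [1/3, 1,   3,   5],
          [1/5, 1/3, 1,   3],
          [1/7, 1/5, 1/3, 1],
      ])

      n = A.shape[0]
      lambda_max = np.linalg.eigvals(A).real.max()

      ci = (lambda_max - n) / (n - 1)          # consistency index
      ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90,  # Saaty's random index
            5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}[n]
      cr = ci / ri                              # consistency ratio

      print(f"lambda_max={lambda_max:.3f}, CI={ci:.3f}, CR={cr:.3f}")

    A CR below 0.1 is conventionally taken as acceptable, which is what the abstract reports for both the AHP and FAHP weighting schemes.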

  7. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    Science.gov (United States)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing accurate seismic hazard maps and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of Zagros Orogenic Belt. In addition to the interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data, where the number of data is limited. In this paper, earthquake pattern in the Zagros has been assessed for the intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on the conventional maps. Regarding our achievements, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.

  8. Geographic risk modeling of childhood cancer relative to county-level crops, hazardous air pollutants and population density characteristics in Texas

    Science.gov (United States)

    Thompson, James A; Carozza, Susan E; Zhu, Li

    2008-01-01

    Background Childhood cancer has been linked to a variety of environmental factors, including agricultural activities, industrial pollutants and population mixing, but etiologic studies have often been inconclusive or inconsistent when considering specific cancer types. More specific exposure assessments are needed. It would be helpful to optimize future studies to incorporate knowledge of high-risk locations or geographic risk patterns. The objective of this study was to evaluate potential geographic risk patterns in Texas accounting for the possibility that multiple cancers may have similar geographic risks patterns. Methods A spatio-temporal risk modeling approach was used, whereby 19 childhood cancer types were modeled as potentially correlated within county-years. The standard morbidity ratios were modeled as functions of intensive crop production, intensive release of hazardous air pollutants, population density, and rapid population growth. Results There was supportive evidence for elevated risks for germ cell tumors and "other" gliomas in areas of intense cropping and for hepatic tumors in areas of intense release of hazardous air pollutants. The risk for Hodgkin lymphoma appeared to be reduced in areas of rapidly growing population. Elevated spatial risks included four cancer histotypes, "other" leukemias, Central Nervous System (CNS) embryonal tumors, CNS other gliomas and hepatic tumors with greater than 95% likelihood of elevated risks in at least one county. Conclusion The Bayesian implementation of the Multivariate Conditional Autoregressive model provided a flexible approach to the spatial modeling of multiple childhood cancer histotypes. The current study identified geographic factors supporting more focused studies of germ cell tumors and "other" gliomas in areas of intense cropping, hepatic cancer near Hazardous Air Pollutant (HAP) release facilities and specific locations with increased risks for CNS embryonal tumors and for "other" leukemias

  9. Geographic risk modeling of childhood cancer relative to county-level crops, hazardous air pollutants and population density characteristics in Texas

    Directory of Open Access Journals (Sweden)

    Zhu Li

    2008-09-01

    Full Text Available Abstract Background Childhood cancer has been linked to a variety of environmental factors, including agricultural activities, industrial pollutants and population mixing, but etiologic studies have often been inconclusive or inconsistent when considering specific cancer types. More specific exposure assessments are needed. It would be helpful to optimize future studies to incorporate knowledge of high-risk locations or geographic risk patterns. The objective of this study was to evaluate potential geographic risk patterns in Texas accounting for the possibility that multiple cancers may have similar geographic risks patterns. Methods A spatio-temporal risk modeling approach was used, whereby 19 childhood cancer types were modeled as potentially correlated within county-years. The standard morbidity ratios were modeled as functions of intensive crop production, intensive release of hazardous air pollutants, population density, and rapid population growth. Results There was supportive evidence for elevated risks for germ cell tumors and "other" gliomas in areas of intense cropping and for hepatic tumors in areas of intense release of hazardous air pollutants. The risk for Hodgkin lymphoma appeared to be reduced in areas of rapidly growing population. Elevated spatial risks included four cancer histotypes, "other" leukemias, Central Nervous System (CNS) embryonal tumors, CNS other gliomas and hepatic tumors with greater than 95% likelihood of elevated risks in at least one county. Conclusion The Bayesian implementation of the Multivariate Conditional Autoregressive model provided a flexible approach to the spatial modeling of multiple childhood cancer histotypes. The current study identified geographic factors supporting more focused studies of germ cell tumors and "other" gliomas in areas of intense cropping, hepatic cancer near Hazardous Air Pollutant (HAP) release facilities and specific locations with increased risks for CNS embryonal tumors and

  10. Building an Ensemble Seismic Hazard Model for the Magnitude Distribution by Using Alternative Bayesian Implementations

    Science.gov (United States)

    Taroni, M.; Selva, J.

    2017-12-01

    In this work we show how we built an ensemble seismic hazard model for the magnitude distribution for the TSUMAPS-NEAM EU project (http://www.tsumaps-neam.eu/). The considered source area includes the whole NEAM region (North East Atlantic, Mediterranean and connected seas). We build our models by using the catalogs (EMEC and ISC), their completeness and the regionalization provided by the project. We developed four alternative implementations of a Bayesian model, considering tapered or truncated Gutenberg-Richter distributions, and fixed or variable b-value. The frequency size distribution is based on the Weichert formulation. This allows for simultaneously assessing all the frequency-size distribution parameters (a-value, b-value, and corner magnitude), using multiple completeness periods for the different magnitudes. With respect to previous studies, we introduce the tapered Pareto distribution (in addition to the classical truncated Pareto), and we build a novel approach to quantify the prior distribution. For each alternative implementation, we set the prior distributions using the global seismic data grouped according to the different types of tectonic setting, and assigned them to the related regions. The estimation is based on the complete (not declustered) local catalog in each region. Using the complete catalog also allows us to consider foreshocks and aftershocks in the seismic rate computation: the Poissonicity of the tsunami events (and similarly the exceedances of the PGA) will be ensured by Le Cam's theorem. This Bayesian approach provides robust estimations also in the zones where few events are available, but also leaves us the possibility to explore the uncertainty associated with the estimation of the magnitude distribution parameters (e.g. with the classical Metropolis-Hastings Monte Carlo method). Finally we merge all the models with their uncertainty to create the ensemble model that represents our knowledge of the seismicity in the
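
    To make the distinction between the truncated and tapered Gutenberg-Richter (Pareto) formulations concrete, the two cumulative rate curves can be sketched as below; the b-value, corner magnitude, maximum magnitude and rates are arbitrary illustrations rather than values estimated for the NEAM region.

      import numpy as np

      def moment(mw):
          """Seismic moment in N*m from moment magnitude (standard relation)."""
          return 10 ** (1.5 * mw + 9.1)

      b = 1.0                      # Gutenberg-Richter b-value (illustrative)
      beta = 2.0 * b / 3.0         # corresponding moment exponent
      m_min, m_max, m_corner = 4.5, 8.0, 7.5
      rate_min = 10.0              # annual rate of events with Mw >= m_min (illustrative)

      mags = np.arange(4.5, 8.01, 0.5)
      M, Mt = moment(mags), moment(m_min)

      # Truncated Gutenberg-Richter: sharp cut-off above m_max
      trunc = rate_min * 10 ** (-b * (mags - m_min))
      trunc[mags > m_max] = 0.0

      # Tapered Pareto: exponential roll-off beyond the corner moment
      taper = rate_min * (Mt / M) ** beta * np.exp((Mt - M) / moment(m_corner))

      for mw, r1, r2 in zip(mags, trunc, taper):
          print(f"Mw>={mw:.1f}: truncated {r1:.4f}/yr, tapered {r2:.4f}/yr")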

  11. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)

    2015-06-15

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.

  12. A Risk Assessment Model for Water Resources: releases of dangerous and hazardous substances.

    Science.gov (United States)

    Rebelo, Anabela; Ferra, Isabel; Gonçalves, Isolina; Marques, Albertina M

    2014-07-01

    Many dangerous and hazardous substances are used, transported and handled daily in diverse situations, from domestic use to industrial processing, and during those operations, spills or other anomalous situations may occur that can lead to contaminant releases followed by contamination of surface water or groundwater through direct or indirect pathways. When dealing with this problem, rapid, technically sound decisions are desirable, and the use of complex methods may not be able to deliver information quickly. This work describes a simple conceptual model based on multi-criteria analysis, involving a strategic appraisal of contamination risk, to support local authorities in making rapid technical decisions. The model involves a screening for environmental risk sources, focussing on persistent, bioaccumulative and toxic (PBT) substances that may be discharged into water resources. It is a simple tool that can be used to follow up actual accident scenarios in real time and to support daily activities, such as site inspections. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Winter wheat response to irrigation, nitrogen fertilization, and cold hazards in the Community Land Model 5

    Science.gov (United States)

    Lu, Y.

    2017-12-01

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in soil carbon balance and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for understanding the energy and water cycles of winter wheat dominated regions. A winter wheat growth model has been developed in the Community Land Model 4.5 (CLM4.5), but its responses to irrigation and nitrogen fertilization have not been validated. In this study, I will validate winter wheat growth response to irrigation and nitrogen fertilization at five winter wheat field sites (TXLU, KSMA, NESA, NDMA, and ABLE) in North America, which were originally designed to understand winter wheat response to nitrogen fertilization and water treatments (4 nitrogen levels and 3 irrigation regimes). I also plan to further update the linkages between winter wheat yield and cold hazards. The previous cold damage function affects yield only indirectly, through a reduction in leaf area index (LAI) and hence photosynthesis; such an approach can sometimes produce a spuriously high yield because the reduced LAI conserves more nutrients for the grain-fill stage.

  14. Survival prediction based on compound covariate under Cox proportional hazard models.

    Directory of Open Access Journals (Sweden)

    Takeshi Emura

    Full Text Available Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.

  15. Estimating effects of rare haplotypes on failure time using a penalized Cox proportional hazards regression model

    Directory of Open Access Journals (Sweden)

    Tanck Michael WT

    2008-01-01

    Full Text Available Abstract Background This paper describes a likelihood approach to model the relation between failure time and haplotypes in studies with unrelated individuals where haplotype phase is unknown, while dealing with the problem of unstable estimates due to rare haplotypes by considering a penalized log-likelihood. Results The Cox model presented here incorporates the uncertainty related to the unknown phase of multiple heterozygous individuals as weights. Estimation is performed with an EM algorithm. In the E-step the weights are estimated, and in the M-step the parameter estimates are estimated by maximizing the expectation of the joint log-likelihood, and the baseline hazard function and haplotype frequencies are calculated. These steps are iterated until the parameter estimates converge. Two penalty functions are considered, namely the ridge penalty and a difference penalty, which is based on the assumption that similar haplotypes show similar effects. Simulations were conducted to investigate properties of the method, and the association between IL10 haplotypes and risk of target vessel revascularization was investigated in 2653 patients from the GENDER study. Conclusion Results from simulations and real data show that the penalized log-likelihood approach produces valid results, indicating that this method is of interest when studying the association between rare haplotypes and failure time in studies of unrelated individuals.
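
    The ridge penalty discussed above corresponds in spirit to an L2-penalized Cox fit; a minimal sketch with the Python lifelines package (whose CoxPHFitter exposes penalizer and l1_ratio arguments in recent versions) is given below, with simulated haplotype indicator columns standing in for the EM-weighted haplotype design of the paper.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(3)
      n = 300

      # Illustrative design: indicator columns for four haplotypes, two of them rare,
      # plus an exponential survival time and an event indicator.
      hap = rng.choice(4, size=n, p=[0.55, 0.35, 0.06, 0.04])
      X = pd.get_dummies(pd.Series(hap, name="hap"), prefix="hap", drop_first=True).astype(float)
      X["time"] = rng.exponential(1.0, size=n)
      X["event"] = 1

      # penalizer adds a ridge (L2) term, shrinking unstable rare-haplotype effects
      cph = CoxPHFitter(penalizer=0.5, l1_ratio=0.0)
      cph.fit(X, duration_col="time", event_col="event")
      print(cph.params_)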

  16. Survival prediction based on compound covariate under Cox proportional hazard models.

    Science.gov (United States)

    Emura, Takeshi; Chen, Yi-Hau; Chen, Hsuan-Yu

    2012-01-01

    Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.

  17. Hydrodynamics of mangrove-type root models: the effect of porosity, spacing ratio and flexibility.

    Science.gov (United States)

    Kazemi, Amirkhosro; Van de Riet, Keith; Curet, Oscar M

    2017-09-21

    Mangrove trees play a prominent role in coastal tropic and subtropical regions, providing habitats for many organisms and protecting shorelines against high energy flows. In particular, the species Rhizophora mangle (red mangrove) exhibits complex cluster roots interacting with different hydrological flow conditions. To better understand the resilience of mangrove trees, we modeled the roots as a collection of cylinders with a circular pattern subject to unidirectional flow. We investigated the effect of porosity and spacing ratio between roots by varying both the diameter of the patch, D, and inset cylinders, d. In addition, we modeled hanging roots of red mangroves as cantilevered rigid cylinders on a hinge. Force and velocity measurements were performed in a water tunnel (Reynolds numbers from 2200 to 11 000). Concurrently, we performed 2D flow visualization using a flowing soap film. We found that the frequency of the vortex shedding increases as the diameter of the small cylinders decreases while the patch diameter is constant, therefore increasing the Strouhal number, [Formula: see text]. By comparing the change of Strouhal numbers with a single solid cylinder, we introduced a new length scale, the effective diameter. The effective diameter of the patch decreases as the porosity increases. In addition, we found that patch drag scales linearly with the patch diameter but decreases linearly as the spacing ratio increases. After a spacing ratio of ([Formula: see text]), the force scales linearly with the free stream velocity, and the mean velocity behind the patch is independent of the Reynolds number and the patch effect disappears. For flexible cylinders, we found that a decrease in stiffness increases both patch drag and the wake deficit behind the patch in a similar fashion as increasing the blockage of the patch. This information has the potential to help in the development of methods to design resilient bio-inspired coastline structures.

  18. An SIRS Epidemic Model with Vital Dynamics and a Ratio-Dependent Saturation Incidence Rate

    Directory of Open Access Journals (Sweden)

    Xinli Wang

    2015-01-01

    Full Text Available This paper presents an investigation on the dynamics of an epidemic model with vital dynamics and a nonlinear incidence rate of saturated mass action as a function of the ratio of the number of the infectives to that of the susceptibles. The stabilities of the disease-free equilibrium and the endemic equilibrium are first studied. Under the assumption of nonexistence of periodic solution, the global dynamics of the model is established: either the number of infective individuals tends to zero as time evolves or it produces bistability in which there is a region such that the disease will persist if the initial position lies in the region and disappears if the initial position lies outside this region. Computer simulation shows such results.
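
    For readers who want to experiment with such a system, SIRS equations with a ratio-dependent saturated incidence can be integrated numerically with scipy; the exact incidence form and all parameter values below are assumptions chosen only for illustration, not the paper's formulation.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative parameters (assumed, not taken from the paper)
      beta, a = 0.5, 0.8                    # transmission rate and saturation constant
      mu, gamma, delta = 0.02, 0.1, 0.05    # birth/death, recovery, immunity loss

      def sirs(t, y):
          S, I, R = y
          ratio = I / S
          incidence = beta * S * ratio / (1.0 + a * ratio)   # assumed saturated form
          dS = mu * (S + I + R) - incidence - mu * S + delta * R
          dI = incidence - (gamma + mu) * I
          dR = gamma * I - (delta + mu) * R
          return [dS, dI, dR]

      sol = solve_ivp(sirs, (0, 400), y0=[0.95, 0.05, 0.0], dense_output=True)
      S, I, R = sol.y[:, -1]
      print(f"long-run state: S={S:.3f}, I={I:.3f}, R={R:.3f}")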

  19. Statistical power of likelihood ratio and Wald tests in latent class models with covariates.

    Science.gov (United States)

    Gudicha, Dereje W; Schmittmann, Verena D; Vermunt, Jeroen K

    2017-10-01

    This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models. For both tests, asymptotic distributions can be used; that is, the test statistic can be assumed to follow a central Chi-square under the null hypothesis and a non-central Chi-square under the alternative hypothesis. Power or sample-size computation using these asymptotic distributions requires specification of the non-centrality parameter, which in practice is rarely known. We show how to calculate this non-centrality parameter using a large simulated data set from the model under the alternative hypothesis. A simulation study is conducted evaluating the adequacy of the proposed power analysis methods, determining the key study design factor affecting the power level, and comparing the performance of the likelihood ratio and Wald test. The proposed power analysis methods turn out to perform very well for a broad range of conditions. Moreover, apart from effect size and sample size, an important factor affecting the power is the class separation, implying that when class separation is low, rather large sample sizes are needed to achieve a reasonable power level.
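
    The proposed power computation reduces to central and non-central chi-square calls once the non-centrality parameter has been obtained from a large simulated data set; in the sketch below the degrees of freedom, the size of the simulated data set and the LR statistic computed on it are placeholders.

      from scipy.stats import chi2, ncx2

      df = 2                # degrees of freedom of the covariate-effect test (assumed)
      alpha = 0.05
      n_large = 10000       # size of the large simulated data set
      lr_stat_large = 180.0 # LR statistic obtained on that data set (placeholder)

      # Non-centrality per observation, then scaled to the planned sample size
      nc_per_obs = lr_stat_large / n_large
      for n_planned in (200, 500, 1000):
          nc = nc_per_obs * n_planned
          crit = chi2.ppf(1 - alpha, df)     # critical value under the null
          power = ncx2.sf(crit, df, nc)      # power under the alternative
          print(f"n={n_planned}: power={power:.3f}")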

  20. The ultimate signal-to-noise ratio in realistic body models.

    Science.gov (United States)

    Guérin, Bastien; Villena, Jorge F; Polimeridis, Athanasios G; Adalsteinsson, Elfar; Daniel, Luca; White, Jacob K; Wald, Lawrence L

    2017-11-01

    We compute the ultimate signal-to-noise ratio (uSNR) and G-factor (uGF) in a realistic head model from 0.5 to 21 Tesla. We excite the head model and a uniform sphere with a large number of electric and magnetic dipoles placed at 3 cm from the object. The resulting electromagnetic fields are computed using an ultrafast volume integral solver, which are used as basis functions for the uSNR and uGF computations. Our generalized uSNR calculation shows good convergence in the sphere and the head and is in close agreement with the dyadic Green's function approach in the uniform sphere. In both models, the uSNR versus B0 trend was linear at shallow depths and supralinear at deeper locations. At equivalent positions, the rate of increase of the uSNR with B0 was greater in the sphere than in the head model. The uGFs were lower in the realistic head than in the sphere for acceleration in the anterior-posterior direction, but similar for the left-right direction. The uSNR and uGFs are computable in nonuniform body models and provide fundamental performance limits for human imaging with close-fitting MRI array coils. Magn Reson Med 78:1969-1980, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  1. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  2. A summary of hazard datasets and guidelines supported by the Global Earthquake Model during the first implementation phase

    Directory of Open Access Journals (Sweden)

    Marco Pagani

    2015-04-01

    Full Text Available The Global Earthquake Model (GEM) initiative promotes open, transparent and collaborative science aimed at the assessment of earthquake risk and its reduction worldwide. During the first implementation phase (2009-2014) GEM sponsored five projects aimed at the creation of global datasets and guidelines toward the creation of open, transparent and, as far as possible, homogeneous hazard input models. These projects concentrated on the following global databases and models: an instrumental catalogue, a historical earthquake archive and catalogue, a geodetic strain rate model, a database of active faults, and a set of ground motion prediction equations. This paper describes the main outcomes of these projects illustrating some initial applications as well as challenges in the creation of hazard models.

  3. A likelihood ratio model for the determination of the geographical origin of olive oil.

    Science.gov (United States)

    Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz

    2015-01-01

    Food fraud or food adulteration may be of forensic interest, for instance in the case of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, geographical origin determination of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages in contrast to typical chemometric methods, because the LR model takes into account the information about the sample rarity in a relevant population. Such properties are of particular interest to forensic scientists, and therefore it has been the aim of this study to examine the issue of olive oil classification with the use of different LR models and their pertinence under selected data pre-processing methods (logarithm based data transformations) and a feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) have been considered with the use of LR models. The correct classification rate and empirical cross entropy were taken into account as measures of performance of each model. The application of LR models in determining the geographical origin of olive oil has proven useful for the considered issues across many variants of data pre-processing, since the rates of correct classification were close to 100% and a considerable reduction of information loss was observed. The work also presents a comparative study of the performance of linear discriminant analysis in the considered classification problems. An approach to the choice of the value of the smoothing parameter for the kernel density estimation based LR models is highlighted as well. Copyright © 2014 Elsevier B.V. All rights reserved.
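
    The kernel-density LR idea can be seen in a one-dimensional toy example: estimate the density of a measured fatty-acid value under the "region in question" hypothesis and under the alternative, and take their ratio. The univariate setting, the Gaussian training data and the questioned value below are simplifying assumptions relative to the multivariate model of the paper.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(4)

      # Toy training data: one fatty-acid concentration for two populations
      south = rng.normal(loc=14.0, scale=1.0, size=300)   # e.g. oils from the South
      other = rng.normal(loc=11.0, scale=1.5, size=300)   # oils from elsewhere

      kde_south = gaussian_kde(south)
      kde_other = gaussian_kde(other)

      questioned = 13.2   # measurement on a questioned oil sample
      lr = kde_south(questioned)[0] / kde_other(questioned)[0]
      print(f"LR = {lr:.2f}  (LR > 1 supports the 'South' hypothesis)")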

  4. Do French macroseismic intensity observations agree with expectations from the European Seismic Hazard Model 2013?

    OpenAIRE

    Rey, Julien; Beauval, Céline; Douglas, John

    2018-01-01

    Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years’ worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very sho...

  5. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about which specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65) was significantly associated with a reduced likelihood of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students’ mesosystems as well as utilizing disciplinary strategies that take into consideration the student’s microsystem roles. PMID:22878779
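
    At its core, a discrete-time hazard model of the kind used above is a logistic regression fitted to person-period data; the single-level sketch below (statsmodels, simulated data) omits the multilevel random effects of the paper, and the covariates, coefficients and sample sizes are assumptions.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n_students, n_periods = 500, 6

      rows = []
      for sid in range(n_students):
          intervention = rng.integers(0, 2)           # e.g. parent-teacher conference
          for t in range(1, n_periods + 1):
              # assumed hazard of a second referral in period t
              logit = -2.0 + 0.15 * t - 0.6 * intervention
              event = rng.binomial(1, 1 / (1 + np.exp(-logit)))
              rows.append((sid, t, intervention, event))
              if event:
                  break                                # leave the risk set after the event

      pp = pd.DataFrame(rows, columns=["id", "period", "intervention", "event"])
      X = sm.add_constant(pp[["period", "intervention"]])
      fit = sm.Logit(pp["event"], X).fit(disp=0)
      print(np.exp(fit.params))    # odds ratios for period and intervention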

  6. Statistical inference for the additive hazards model under outcome-dependent sampling.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and we derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure.

  7. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-05-20

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  8. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock (1996). Secondly, our tests incorporate a “sign” restriction which generalizes the one-sided unit root test. We show that the asymptotic local power of the proposed tests dominates that of existing cointegration rank tests.

  9. Mesomechanical model and analysis of an artificial muscle functioning: role of Poisson’s ratio

    International Nuclear Information System (INIS)

    Shil’ko, Serge; Chernous, Dmitry; Basinyuk, Vladimir

    2016-01-01

    The mechanism of force generation in a polymer monofilament actuator element with auxetic characteristics is modeled to support the development and optimization of a controlled drive based on the use of electrostrictive polymers. The monofilament is considered as a viscoelastic rod. By assuming a ‘sliding thread’ deformation occurring within the system, the variation of the monofilament length during uniform contraction and the force generated during a uniaxial mode of actuation have been obtained. The distribution of the axial stress along the length of the monofilament was determined at various stages of the uniform contraction. The rate of contraction reaches a maximum, together with a minimum of the stress intensity, when the equivalent Poisson’s ratio of the actuator is negative. (paper)

  10. Comparison Between Corn Evapotranspiration Rates by the Modified Bowen Ratio and the Ceres-Maize Model

    Directory of Open Access Journals (Sweden)

    Vitor de Jesus Martins Bianchini

    2017-07-01

    Full Text Available Corn stands out among grains because of its high global importance due to its chemical composition, nutritional value and productive potential. Several factors influence corn crop performance, and climate poses the greatest challenges for crop planning and management. Although tolerant to water deficit, the corn plant shows high sensitivity to water scarcity at specific developmental stages. Therefore, knowing the factors related to water loss, namely potential or reference evapotranspiration (ET0) and crop evapotranspiration (ETc), is crucial. This work aimed to calculate ET0 by the methods of Priestley and Taylor (1972) and Penman-Monteith, and the ETc of the corn crop, both using the CERES-MAIZE model, and to compare them with the results observed by the Modified Bowen Ratio (MBR) method. The field experiment was conducted in the experimental area of ESALQ/USP and sensors were installed for data collection. For the comparison of results, the CERES-MAIZE model, duly calibrated for the experimental conditions, was used. The results showed that, through the CERES-MAIZE model, ET0 was underestimated by the Penman-Monteith method and overestimated by the Priestley and Taylor method throughout the crop cycle. However, at the end of the cycle, the accumulated values were lower than those measured by the MBR method.
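
    For orientation, the sketch below implements the Priestley-Taylor reference evapotranspiration, one of the two ET0 formulations compared in the study; the constants (alpha = 1.26, the psychrometric constant and the latent heat of vaporisation) are common textbook defaults rather than the values used in the CERES-MAIZE runs.

        # Minimal sketch of Priestley-Taylor reference evapotranspiration (ET0).
        # Constants are typical defaults, not the study's calibration.
        import math

        def priestley_taylor_et0(t_mean_c, rn, g=0.0, alpha=1.26, gamma=0.066):
            """ET0 in mm/day. t_mean_c in deg C; rn, g in MJ m-2 day-1."""
            # Slope of the saturation vapour pressure curve (FAO-56 form), kPa/degC
            es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
            delta = 4098.0 * es / (t_mean_c + 237.3) ** 2
            lam = 2.45  # latent heat of vaporisation, MJ/kg
            return alpha * (delta / (delta + gamma)) * (rn - g) / lam

        print(priestley_taylor_et0(t_mean_c=24.0, rn=15.0))  # roughly 5-6 mm/day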

  11. Numerical modeling of debris avalanches at Nevado de Toluca (Mexico): implications for hazard evaluation and mapping

    Science.gov (United States)

    Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.

    2007-05-01

    The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca Volcano (Mexico) using the TITAN2D simulation software, and its application to the creation of hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area that today has more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma to 10.5 ka with both effusive and explosive events; Nevado de Toluca has experienced long phases of inactivity characterized by erosion and by the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are extensive debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. The analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. The modeling software is built upon equations

  12. Flow-R, a model for susceptibility mapping of debris flows and other gravitational hazards at a regional scale

    Directory of Open Access Journals (Sweden)

    P. Horton

    2013-04-01

    Full Text Available The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model suitable for various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution to be a good compromise between processing time
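
    As a rough illustration of the kind of spreading algorithm Flow-R modifies, the sketch below computes Holmgren's (1994) multiple-flow-direction weights for a single 3 x 3 DEM window; the exponent, cell size and toy elevations are illustrative assumptions, and Flow-R's improved version adds further terms not shown here.

        # Minimal sketch of Holmgren-style multiple-flow-direction weighting:
        # p_i = tan(beta_i)**x / sum_j tan(beta_j)**x over the downslope neighbours.
        import numpy as np

        def holmgren_weights(z_window, cell_size=10.0, x=4.0):
            """Flow proportions from the centre cell of a 3x3 DEM window."""
            centre = z_window[1, 1]
            dist = cell_size * np.array([[2**0.5, 1, 2**0.5],
                                         [1,      1, 1],        # centre distance unused
                                         [2**0.5, 1, 2**0.5]])
            tan_beta = (centre - z_window) / dist      # positive where neighbour is lower
            tan_beta[1, 1] = 0.0
            tan_beta = np.clip(tan_beta, 0.0, None)    # keep only downslope directions
            weights = tan_beta ** x
            total = weights.sum()
            return weights / total if total > 0 else weights  # flat/pit cell: no outflow

        dem = np.array([[102.0, 101.0, 100.5],
                        [101.5, 100.0,  99.0],
                        [101.0,  99.5,  98.0]])
        print(holmgren_weights(dem).round(3))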

  13. Kaon-pion ratio from ISR results and the derived sea level muon spectrum from Maeda's model

    CERN Document Server

    Bhattacharya, D P

    1978-01-01

    The sea-level muon spectrum has been calculated using Maeda's (1973) model. The contribution of the muon flux caused by kaon decay has been included in the calculation as the kaon-pion ratio. The value used for this ratio is that determined by the CERN Intersecting Storage Ring Group, Antinucci et al. (1973). (7 refs).

  14. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  15. Modeling high signal-to-noise ratio in a novel silicon MEMS microphone with comb readout

    Science.gov (United States)

    Manz, Johannes; Dehe, Alfons; Schrag, Gabriele

    2017-05-01

    Strong competition within the consumer market urges the companies to constantly improve the quality of their devices. For silicon microphones excellent sound quality is the key feature in this respect which means that improving the signal-to-noise ratio (SNR), being strongly correlated with the sound quality is a major task to fulfill the growing demands of the market. MEMS microphones with conventional capacitive readout suffer from noise caused by viscous damping losses arising from perforations in the backplate [1]. Therefore, we conceived a novel microphone design based on capacitive read-out via comb structures, which is supposed to show a reduction in fluidic damping compared to conventional MEMS microphones. In order to evaluate the potential of the proposed design, we developed a fully energy-coupled, modular system-level model taking into account the mechanical motion, the slide film damping between the comb fingers, the acoustic impact of the package and the capacitive read-out. All submodels are physically based scaling with all relevant design parameters. We carried out noise analyses and due to the modular and physics-based character of the model, were able to discriminate the noise contributions of different parts of the microphone. This enables us to identify design variants of this concept which exhibit a SNR of up to 73 dB (A). This is superior to conventional and at least comparable to high-performance variants of the current state-of-the art MEMS microphones [2].

  16. Adaptation of fugacity models to treat speciating chemicals with constant species concentration ratios.

    Science.gov (United States)

    Toose, Liisa K; Mackay, Donald

    2004-09-01

    A "multiplier" method is developed by which multimedia mass balance fugacity models designed to describe the fate of a single chemical species can be applied to chemicals that exist as several interconverting species. The method is applicable only when observed ratios of species concentrations in each phase are relatively constant and there is thus no need to define interspecies conversion rates. It involves the compilation of conventional transformation and intermedia transport rate expressions for a single, selected key species, and then a multiplier, Ri, is deduced for each of the other species. The total rate applicable to all species is calculated as the product of the rate for the single key species and a combined multiplier (1 + R2 + R3 + etc.). The theory is developed and illustrated by two examples. Limitations of the method are discussed, especially under conditions when conversion rates are uncertain. The advantage of this approach is that existing fugacity and concentration-based models that describe the fate of single-species chemicals can be readily adapted to estimate the fate of multispecies substances such as mercury which display relatively constant species proportions in each medium.

  17. Fatigue Modeling for Superelastic NiTi Considering Cyclic Deformation and Load Ratio Effects

    Science.gov (United States)

    Mahtabi, Mohammad J.; Shamsaei, Nima

    2017-09-01

    A cumulative energy-based damage model, called total fatigue toughness, is proposed for fatigue life prediction of superelastic NiTi alloys with various deformation responses (i.e., transformation stresses), which also accounts for the effects of mean strain and stress. Mechanical response of superelastic NiTi is highly sensitive to chemical composition, material processing, as well as operating temperature; therefore, significantly different deformation responses may be obtained for seemingly identical NiTi specimens. In this paper, a fatigue damage parameter is proposed that can be used for fatigue life prediction of superelastic NiTi alloys with different mechanical properties such as loading and unloading transformation stresses, modulus of elasticity, and austenite-to-martensite start and finish strains. Moreover, the model is capable of capturing the effects of tensile mean strain and stress on the fatigue behavior. Fatigue life predictions using the proposed damage parameter for specimens with different cyclic stress responses, tested at various strain ratios (Rε = εmin/εmax), are shown to be in very good agreement with the experimentally observed fatigue lives.

  18. Goodness-of-fit test of the stratified mark-specific proportional hazards model with continuous mark.

    Science.gov (United States)

    Sun, Yanqing; Li, Mei; Gilbert, Peter B

    2016-01-01

    Motivated by the need to assess HIV vaccine efficacy, previous studies proposed an extension of the discrete competing risks proportional hazards model, in which the cause of failure is replaced by a continuous mark only observed at the failure time. However, the model assumptions may fail in several ways, and no diagnostic testing procedure for this situation has been proposed. A goodness-of-fit test procedure is proposed for the stratified mark-specific proportional hazards model in which the regression parameters depend nonparametrically on the mark and the baseline hazard depends nonparametrically on both time and the mark. The test statistics are constructed based on the weighted cumulative mark-specific martingale residuals. The critical values of the proposed test statistics are approximated using the Gaussian multiplier method. The performance of the proposed tests is examined extensively in simulations for a variety of models under the null hypothesis and under different types of alternative models. An analysis of the 'Step' HIV vaccine efficacy trial using the proposed method is presented. The analysis suggests that the HIV vaccine candidate may increase susceptibility to HIV acquisition.

  19. Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL

    Science.gov (United States)

    Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María

    2016-04-01

    The objective of this work is to determine whether soil erosion increases after recurrent forest fires in an area. To that end, an area of 22,130 ha was studied because it has a high frequency of fires. This area is located in the northwest of the Iberian Peninsula. The erosion hazard was assessed at several points in time using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had been burnt in the past 15 years. Because a detailed study of such a large area is complex and information is not available annually, it was necessary to select the most relevant dates. In August 2012 the most aggressive and extensive fire in the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D (Revised Universal Soil Loss Equation 3D) model was used to produce maps of erosion losses. This model improves on the traditional USLE (Wischmeier and D., 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible, using the GIS "gvSIG" (http://www.gvsig.com/es), and the metadata were taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics, some of whom suggest that it serves only to carry out comparisons between areas, and not to calculate absolute soil loss values. These authors argue that in field measurements the eroded soil actually recovered can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
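
    As a rough illustration of the calculation behind such maps, the sketch below evaluates a RUSLE-type soil loss for a single cell, A = R * K * LS * C * P, with the LS factor derived from upslope contributing area and slope in the manner of RUSLE3D-style formulations; the exponents, reference values and input numbers are common defaults and invented figures, not the calibration used in the paper.

        # Minimal sketch of a RUSLE-type soil loss estimate for one grid cell.
        # Exponents (0.4, 1.3) and reference values (22.13 m, 0.0896) are commonly
        # used defaults; inputs are hypothetical post-fire values.
        import math

        def ls_factor(upslope_area_m2, cell_size_m, slope_deg, m=0.4, n=1.3):
            """Topographic factor from specific catchment area and slope."""
            specific_area = upslope_area_m2 / cell_size_m
            return (specific_area / 22.13) ** m * (math.sin(math.radians(slope_deg)) / 0.0896) ** n

        def rusle_soil_loss(r, k, ls, c, p):
            """Mean annual soil loss for consistent R and K units."""
            return r * k * ls * c * p

        ls = ls_factor(upslope_area_m2=900.0, cell_size_m=10.0, slope_deg=18.0)
        # Hypothetical cell: high erosivity, erodible soil, sparse post-fire cover.
        print(rusle_soil_loss(r=1200.0, k=0.03, ls=ls, c=0.25, p=1.0))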

  20. Mathematical Decision Models Applied for Qualifying and Planning Areas Considering Natural Hazards and Human Dealing

    Science.gov (United States)

    Anton, Jose M.; Grau, Juan B.; Tarquis, Ana M.; Sanchez, Elena; Andina, Diego

    2014-05-01

    The authors have been involved in the use of Mathematical Decision Models (MDM) to improve knowledge and planning for large natural or administrative areas in which natural soils, climate, and agricultural and forest uses were the main factors, but where human resources and outcomes were also important and natural hazards were relevant. In one line of work they contributed to the qualification of lands of the Community of Madrid (CM), an administrative area in the centre of Spain containing a band of mountains to the north, part of the Iberian plateau and river terraces in the centre, and also the Madrid metropolis, building on an official UPM study for the CM that qualified lands using a FAO model based on required minimums over a whole set of Soil Science criteria. From these criteria the authors first derived a complementary additive qualification, and later tried an intermediate qualification combining both using fuzzy logic. The authors were also involved, together with colleagues from Argentina and elsewhere who are in contact with local planners, in the consideration of regions and the selection of management entities for them. At these general levels they adopted multi-criteria MDM, using a weighted PROMETHEE and also an ELECTRE-I with the same elicited weights for the criteria and data, and alongside these, AHP using Expert Choice based on comparisons among similar criteria structured in two levels. The alternatives depend on the case study, and these areas with monsoon climates have natural hazards that are decisive for their selection and qualification, with an initial matrix used for ELECTRE and PROMETHEE. For the natural area of Arroyos Menores, south of the town of Rio Cuarto, with the subarea of La Colacha to the north, the loess lands are rich but now suffer from water erosion forming regressive ditches that are spoiling them, and land-use alternatives must consider Soil Conservation and Hydraulic Management actions. The soils may be used in diverse, mutually incompatible ways, such as autochthonous forest, high-value forest, traditional

  1. Modeling Lahar Hazard Zones for Eruption-Generated Lahars from Lassen Peak, California

    Science.gov (United States)

    Robinson, J. E.; Clynne, M. A.

    2010-12-01

    Lassen Peak, a high-elevation, seasonally snow-covered peak located within Lassen Volcanic National Park, has lahar deposits in several drainages that head on or near the lava dome. This suggests that these drainages are susceptible to future lahars. The majority of the recognized lahar deposits are related to the May 19 and 22, 1915 eruptions of Lassen Peak. These small-volume eruptions generated lahars and floods when an avalanche of snow and hot rock, and a pyroclastic flow moved across the snow-covered upper flanks of the lava dome. Lahars flowed to the north down Lost Creek and Hat Creek. In Lost Creek, the lahars flowed up to 16 km downstream and deposited approximately 8.3 x 10^6 m^3 of sediment. This study uses geologic mapping of the 1915 lahar deposits as a guide for LAHARZ modeling to assist in the assessment of present-day susceptibility for lahars in drainages heading on Lassen Peak. The LAHARZ model requires a Height over Length (H/L) energy cone controlling the initiation point of a lahar. We chose an H/L cone with a slope of 0.3 that intersects the earth’s surface at the break in slope at the base of the volcanic dome. Typically, the snow pack reaches its annual maximum by May. Average and maximum May snow-water content, a depth of water equal to 2.1 m and 3.5 m respectively, were calculated from a local snow gauge. A potential volume for individual 1915 lahars was calculated using the deposit volume, the snow-water contents, and the areas stripped of snow by the avalanche and pyroclastic flow. The calculated individual lahars in Lost Creek ranged in size from 9 x 10^6 m^3 to 18.4 x 10^6 m^3. These volumes modeled in LAHARZ matched the 1915 lahars remarkably well, with the modeled flows ending within 4 km of the mapped deposits. We delineated six drainage basins that head on or near Lassen Peak with the highest potential for lahar hazards: Lost Creek, Hat Creek, Manzanita Creek, Mill Creek, Warner Creek, and Bailey Creek. We calculated the area of each
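
    For context, LAHARZ converts an assumed lahar volume into inundated cross-sectional and planimetric areas through simple scaling laws; the sketch below uses the commonly published lahar coefficients (0.05 and 200), which should be treated here as illustrative rather than as the exact values used in this study.

        # Minimal sketch of LAHARZ-style volume-to-area scaling for lahars.
        # Coefficients are the widely cited defaults, assumed for illustration.
        def laharz_areas(volume_m3):
            """Return (cross-sectional area, planimetric area) in m^2 for a lahar volume."""
            a_cross = 0.05 * volume_m3 ** (2.0 / 3.0)
            b_plan = 200.0 * volume_m3 ** (2.0 / 3.0)
            return a_cross, b_plan

        # The 1915 Lost Creek lahars modelled in the abstract span ~9e6 to 1.84e7 m^3.
        for v in (9.0e6, 1.84e7):
            a, b = laharz_areas(v)
            print(f"V = {v:.2e} m^3 -> A = {a:.0f} m^2, B = {b:.2e} m^2")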

  2. Soil-to-Plant Concentration Ratios for Assessing Food Chain Pathways in Biosphere Models

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.; Fellows, Robert J.; Krupka, Kenneth M.

    2007-10-01

    This report describes work performed for the U.S. Nuclear Regulatory Commission’s project Assessment of Food Chain Pathway Parameters in Biosphere Models, which was established to assess and evaluate a number of key parameters used in the food-chain models used in performance assessments of radioactive waste disposal facilities. Section 2 of this report summarizes characteristics of samples of soils and groundwater from three geographical regions of the United States, the Southeast, Northwest, and Southwest, and analyses performed to characterize their physical and chemical properties. Because the uptake and behavior of radionuclides in plant roots, plant leaves, and animal products depend on the chemistry of the water and soil coming in contact with plants and animals, water and soil samples collected from these regions of the United States were used in experiments at Pacific Northwest National Laboratory to determine radionuclide soil-to-plant concentration ratios. Crops and forage used in the experiments were grown in the soils, and long-lived radionuclides introduced into the groundwater provided the contaminated water used to irrigate the plants. The radionuclides evaluated include 99Tc, 238Pu, and 241Am. Plant varieties include alfalfa, corn, onion, and potato. The radionuclide uptake results from this research study show how regional variations in water quality and soil chemistry affect radionuclide uptake. Section 3 summarizes the procedures and results of the uptake experiments, and relates the soil-to-plant uptake factors derived. In Section 4, the results found in this study are compared with similar values found in the biosphere modeling literature; the study’s results are generally in line with current literature, but soil- and plant-specific differences are noticeable. This food-chain pathway data may be used by the NRC staff to assess dose to persons in the reference biosphere (e.g., persons who live and work in an area potentially affected by

  3. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Gaunaa, Mac; Machefaux, Ewan

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum(BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data...

  4. Assessment of Debris Flow Potential Hazardous Zones Using Numerical Models in the Mountain Foothills of Santiago, Chile

    Science.gov (United States)

    Celis, C.; Sepulveda, S. A.; Castruccio, A.; Lara, M.

    2017-12-01

    Debris and mudflows are some of the main geological hazards in the mountain foothills of Central Chile. The risk of flows triggered in the basins of ravines that drain the Andean frontal range into the capital city, Santiago, increases with time due to accelerated urban expansion. Susceptibility assessments have been made by several authors to detect the main active ravines in the area. The Macul and San Ramon ravines have a high to medium debris flow susceptibility, whereas the Lo Cañas, Apoquindo and Las Vizcachas ravines have a medium to low debris flow susceptibility. This study emphasizes delimiting the potentially hazardous zones using the numerical simulation program RAMMS-Debris Flows, with the Voellmy model approach, and the debris-flow model LAHARZ. For the RAMMS approach, this is carried out by back-calculating the frictional parameters in the depositional zone from a known event, the debris and mudflows of May 3rd, 1993 in the Macul and San Ramon ravines. For the same scenario, we calibrate the coefficients of the LAHARZ model to match the conditions of the mountain foothills of Santiago. We use the information obtained for every main ravine in the study area, mainly because of the similarity in slopes and transported material. Simulations were made for the worst-case scenario, caused by the combination of intense rainfall storms, a high 0°C isotherm level and material availability in the basins where the flows are triggered. The results show that the runout distances are well simulated, therefore a debris-flow hazard map could be developed with these models. Correlation issues concerning the run-up, deposit thickness and transversal areas are reported. Hence, the models do not represent the full complexity of the phenomenon, but they are a reliable approximation for preliminary hazard maps.

  5. Flood susceptibility analysis through remote sensing, GIS and frequency ratio model

    Science.gov (United States)

    Samanta, Sailesh; Pal, Dilip Kumar; Palsamanta, Babita

    2018-05-01

    Papua New Guinea (PNG) is saddled with frequent natural disasters like earthquakes, volcanic eruptions, landslides, droughts, floods etc. Flood, as a hydrological disaster to humankind's niche, brings about a powerful and often sudden, pernicious change in the surface distribution of water on land, while the benevolence of flood manifests in restoring the health of the thalweg from excessive siltation by redistributing the fertile sediments on the riverine floodplains. From social, economic and environmental perspectives, flood is one of the most devastating disasters in PNG. This research was conducted to investigate the usefulness of remote sensing, geographic information systems and the frequency ratio (FR) model for flood susceptibility mapping. The FR model was used to handle the different independent variables via weight-based bivariate probability values to generate a plausible flood susceptibility map. This study was conducted in the Markham riverine precinct under Morobe province in PNG. A historical flood inventory database of the PNG resource information system (PNGRIS) was used to generate 143 flood locations based on a "create fishnet" analysis. 100 (70%) flood sample locations were selected randomly for model building. Ten independent variables, namely land use/land cover, elevation, slope, topographic wetness index, surface runoff, landform, lithology, distance from the main river, soil texture and soil drainage, were used in the FR model for flood vulnerability analysis. Finally, the database was developed for areas vulnerable to flooding. The result demonstrated a span of FR values ranging from 2.66 (least flood prone) to 19.02 (most flood prone) for the study area. The developed database was reclassified into five (5) flood vulnerability zones based on the FR values, namely very low (less than 5.0), low (5.0-7.5), moderate (7.5-10.0), high (10.0-12.5) and very high susceptibility (more than 12.5). The result indicated that about 19.4% land area as `very high
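
    The frequency ratio itself is a simple quantity: the proportion of flood locations falling in a factor class divided by the proportion of the study area occupied by that class, with a per-pixel susceptibility index then obtained by summing the FR values across all factors. The sketch below uses hypothetical counts.

        # Minimal sketch of the frequency ratio (FR) for one class of a conditioning
        # factor. Counts are invented for illustration.
        def frequency_ratio(flood_in_class, flood_total, area_in_class, area_total):
            return (flood_in_class / flood_total) / (area_in_class / area_total)

        # Example: an elevation class covering 8% of the area that contains 20% of
        # the mapped flood locations.
        fr = frequency_ratio(flood_in_class=20, flood_total=100,
                             area_in_class=8_000, area_total=100_000)
        print(fr)  # 2.5 -> class is flood-prone relative to its areal extent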

  6. Relative Hazard Calculation Methodology

    International Nuclear Information System (INIS)

    DL Strenge; MK White; RD Stenner; WB Andrews

    1999-01-01

    The methodology presented in this document was developed to provide a means of calculating the RH ratios to use in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated values of hazard could then be examined to help understand which risk management activities are addressing the higher hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation)

  7. Performance of two formal tests based on martingales residuals to check the proportional hazard assumption and the functional form of the prognostic factors in flexible parametric excess hazard models.

    Science.gov (United States)

    Danieli, Coraline; Bossard, Nadine; Roche, Laurent; Belot, Aurelien; Uhry, Zoe; Charvat, Hadrien; Remontet, Laurent

    2017-07-01

    Net survival, the one that would be observed if the disease under study was the only cause of death, is an important, useful, and increasingly used indicator in public health, especially in population-based studies. Estimates of net survival and effects of prognostic factor can be obtained by excess hazard regression modeling. Whereas various diagnostic tools were developed for overall survival analysis, few methods are available to check the assumptions of excess hazard models. We propose here two formal tests to check the proportional hazard assumption and the validity of the functional form of the covariate effects in the context of flexible parametric excess hazard modeling. These tests were adapted from martingale residual-based tests for parametric modeling of overall survival to allow adding to the model a necessary element for net survival analysis: the population mortality hazard. We studied the size and the power of these tests through an extensive simulation study based on complex but realistic data. The new tests showed sizes close to the nominal values and satisfactory powers. The power of the proportionality test was similar or greater than that of other tests already available in the field of net survival. We illustrate the use of these tests with real data from French cancer registries. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Wind vs Water in Hurricanes: The Challenge of Multi-peril Hazard Modeling

    Science.gov (United States)

    Powell, M. D.

    2017-12-01

    operational solution to collect wind and water level measurements, and to conduct observation based modeling of wind and water impacts. My presentation will discuss some of the challenges to wind and water hazard monitoring and modeling.

  9. Finite element analysis of high aspect ratio wind tunnel wing model: A parametric study

    Science.gov (United States)

    Rosly, N. A.; Harmin, M. Y.

    2017-12-01

    The procedure for designing the wind tunnel model of a high aspect ratio (HAR) wing containing geometric nonlinearities is described in this paper. The design process begins with identification of the basic features of the HAR wing as well as its design constraints. This enables the design space to be narrowed down and, consequently, eases convergence towards the design solution. Parametric studies in terms of the spar thickness, the span length and the store diameter are performed using finite element analysis for both undeformed and deformed cases, which respectively demonstrate the linear and nonlinear conditions. Two main criteria are accounted for in the selection of the wing design: the static deflections due to gravitational loading should be within the allowable margin of the size of the wind tunnel test section, and the flutter speed of the wing should be well below the maximum speed of the wind tunnel. The findings show that the wing experiences a stiffness-hardening effect under the nonlinear static solution and that the presence of the store enables a significant reduction in linear flutter speed.

  10. Foxp3 regulates ratio of Treg and NKT cells in a mouse model of asthma.

    Science.gov (United States)

    Lu, Yanming; Guo, Yinshi; Xu, Linyun; Li, Yaqin; Cao, Lanfang

    2015-05-01

    Asthma is caused by chronic inflammation of the airways. Regulatory T cells (Treg cells) and natural killer T cells (NKT cells) both play critical roles in the pathogenesis of asthma. Activation of Treg cells requires Foxp3, whereas whether Foxp3 may regulate the ratio of Treg and NKT cells to affect asthma is uncertain. In an ovalbumin (OVA)-induced mouse model of asthma, we either increased Treg cells by lentivirus-mediated forced expression of exogenous Foxp3, or increased NKT cells by stimulation with their activator α-GalCer. We found that CD4+CD25+ Treg cells were increased by forced Foxp3 expression and decreased by α-GalCer, while CD3+CD161+ NKT cells were decreased by forced Foxp3 expression and increased by α-GalCer. Moreover, forced Foxp3 expression, but not α-GalCer, significantly alleviated the hallmarks of asthma. Furthermore, forced Foxp3 expression increased levels of IL-10 and TGF-β1, and α-GalCer increased levels of IL-4 and IFN-γ in the OVA-treated lung. Taken together, our study suggests that Foxp3 may activate Treg cells and suppress NKT cells in asthma. Treg and NKT cells may antagonize the effects of each other in asthma.

  11. Analysis of urinary human chorionic gonadotrophin concentrations in normal and failing pregnancies using longitudinal, Cox proportional hazards and two-stage modelling.

    Science.gov (United States)

    Marriott, Lorrae; Zinaman, Michael; Abrams, Keith R; Crowther, Michael J; Johnson, Sarah

    2017-09-01

    Background Human chorionic gonadotrophin is a marker of early pregnancy. This study sought to determine whether it is possible to distinguish between healthy and failing pregnancies by utilizing patient-associated risk factors and daily urinary human chorionic gonadotrophin concentrations. Methods Data were from a study that collected daily early morning urine samples from women trying to conceive (n = 1505), 250 of whom became pregnant. Data from 129 women who became pregnant (including 44 miscarriages) were included in these analyses. A longitudinal model was used to profile human chorionic gonadotrophin, a Cox proportional hazards model to assess demographic/menstrual history data on the time to failed pregnancy, and a two-stage model to combine these two models. Results The profile for log human chorionic gonadotrophin concentrations in women suffering miscarriage differs from that of viable pregnancies; the rate of human chorionic gonadotrophin rise is slower in those suffering a biochemical loss (loss before six weeks, recognized by a rise and fall of human chorionic gonadotrophin) and tends to plateau at a lower log human chorionic gonadotrophin in women suffering an early miscarriage (loss six weeks or later), compared with viable pregnancies. Maternal age, longest cycle length and time from luteinizing hormone surge to human chorionic gonadotrophin reaching 25 mIU/mL were found to be significantly associated with miscarriage risk. The two-stage model found that for an increase of one day in the time from luteinizing hormone surge to human chorionic gonadotrophin reaching 25 mIU/mL, there is a 30% increase in miscarriage risk (hazard ratio: 1.30; 95% confidence interval: 1.04, 1.62). Conclusion Rise of human chorionic gonadotrophin in early pregnancy could be useful to predict pregnancy viability. Daily tracking of urinary human chorionic gonadotrophin may enable early identification of some pregnancies at risk of miscarriage.
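
    As a generic illustration of the survival-analysis stage (not the authors' two-stage model), the sketch below fits a Cox proportional hazards model to a toy data set with the lifelines package; the exponentiated coefficient plays the role of the hazard ratio reported in the abstract. All column names and values are invented.

        # Minimal sketch of a Cox proportional hazards fit of time to pregnancy loss
        # on a single predictor. Data frame and column names are hypothetical.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "weeks_to_loss_or_censor": [6, 12, 12, 8, 12, 5, 12, 12],
            "miscarriage": [1, 0, 0, 1, 0, 1, 0, 0],          # 1 = loss observed
            "days_lh_to_hcg25": [9, 7, 8, 8, 10, 12, 7, 7],   # LH surge to hCG >= 25 mIU/mL
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="weeks_to_loss_or_censor", event_col="miscarriage")
        # exp(coef) is the hazard ratio per extra day; the study reports HR = 1.30
        # (95% CI 1.04-1.62) for this kind of covariate.
        cph.print_summary()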

  12. Introducing Geoscience Students to Numerical Modeling of Volcanic Hazards: The example of Tephra2 on VHub.org

    Directory of Open Access Journals (Sweden)

    Leah M. Courtland

    2012-07-01

    Full Text Available The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.

  13. A Real-Time Construction Safety Monitoring System for Hazardous Gas Integrating Wireless Sensor Network and Building Information Modeling Technologies

    Directory of Open Access Journals (Sweden)

    Weng-Fong Cheung

    2018-02-01

    Full Text Available In recent years, many studies have focused on the application of advanced technology as a way to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates a database and geometry into a digital model which provides a visualized way to manage the whole construction lifecycle. This paper integrates BIM and WSN into a unique system which enables the construction site to visually monitor the safety status via a spatial, colored interface and remove any hazardous gas automatically. Wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental conditions (temperature and humidity); in any region where an abnormal status is detected, the BIM model highlights the region and an alarm and ventilator on site start automatically for warning and removing the hazard. The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions and suggestions are summarized for further applications.

  14. A Real-Time Construction Safety Monitoring System for Hazardous Gas Integrating Wireless Sensor Network and Building Information Modeling Technologies.

    Science.gov (United States)

    Cheung, Weng-Fong; Lin, Tzu-Hsuan; Lin, Yu-Cheng

    2018-02-02

    In recent years, many studies have focused on the application of advanced technology as a way to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates a database and geometry into a digital model which provides a visualized way to manage the whole construction lifecycle. This paper integrates BIM and WSN into a unique system which enables the construction site to visually monitor the safety status via a spatial, colored interface and remove any hazardous gas automatically. Wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental conditions (temperature and humidity); in any region where an abnormal status is detected, the BIM model highlights the region and an alarm and ventilator on site start automatically for warning and removing the hazard. The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions and suggestions are summarized for further applications.
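
    The alerting logic described above can be summarised in a few lines. The sketch below assumes a carbon monoxide threshold, a mapping from sensor nodes to BIM zones and placeholder device actions; none of these reflect the authors' actual implementation.

        # Minimal sketch of threshold-based gas alerting tied to BIM zones.
        # Threshold, readings and zone names are hypothetical placeholders.
        CO_THRESHOLD_PPM = 35.0

        def check_readings(readings, bim_zones):
            """readings: {node_id: co_ppm}; bim_zones: {node_id: zone name in the BIM model}."""
            alerts = []
            for node_id, co_ppm in readings.items():
                if co_ppm > CO_THRESHOLD_PPM:
                    zone = bim_zones.get(node_id, "unknown zone")
                    alerts.append(f"ALERT {zone}: CO {co_ppm:.1f} ppm -> start alarm + ventilator")
            return alerts

        readings = {"node-01": 12.4, "node-07": 48.9}          # hypothetical sensor packet
        bim_zones = {"node-01": "Tunnel section A", "node-07": "Tunnel section C"}
        print("\n".join(check_readings(readings, bim_zones)))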

  15. Assessing End-Of-Supply Risk of Spare Parts Using the Proportional Hazard Model

    NARCIS (Netherlands)

    X. Li (Xishu); R. Dekker (Rommert); C. Heij (Christiaan); M. Hekimoğlu (Mustafa)

    2016-01-01

    Operators of long field-life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on operating costs of firms needing these parts. Existing end-of-supply evaluation

  16. Modelling risk in high hazard operations : Integrating technical, organisational and cultural factors

    NARCIS (Netherlands)

    Ale, B.J.M.; Hanea, D.M.; Sillem, S.; Lin, P.H.; Van Gulijk, C.; Hudson, P.T.W.

    2012-01-01

    Recent disasters in high hazard industries such as Oil and Gas Exploration (The Deepwater Horizon) and Petrochemical production (Texas City) have been found to have causes that range from direct technical failures through organizational shortcomings right up to weak regulation and inappropriate

  17. Incorporating fine-scale drought information into an eastern US wildfire hazard model

    Science.gov (United States)

    Matthew P. Peters; Louis R. Iverson

    2017-01-01

    Wildfires in the eastern United States are generally caused by humans in locations where human development and natural vegetation intermingle, e.g. the wildland–urban interface (WUI). Knowing where wildfire hazards are elevated across the forested landscape may help land managers and property owners plan or allocate resources for potential wildfire threats. In an...

  18. Sex ratio selection and multi-factorial sex determination in the housefly : A dynamic model

    NARCIS (Netherlands)

    Kozielska, M.A.; Pen, I.R.; Beukeboom, L.W.; Weissing, F.J.

    Sex determining (SD) mechanisms are highly variable between different taxonomic groups and appear to change relatively quickly during evolution. Sex ratio selection could be a dominant force causing such changes. We investigate theoretically the effect of sex ratio selection on the dynamics of a

  19. Oxygen and hydrogen isotope ratios in tree rings: how well do models predict observed values?

    CSIR Research Space (South Africa)

    Waterhouse, JS

    2002-07-30

    Full Text Available Annual oxygen and hydrogen isotope ratios in the alpha-cellulose of the latewood of oak (Quercus robur L.) growing on well-drained ground in Norfolk, UK have been measured. The authors have compared the observed values of isotope ratios with those...

  20. Hazardous Drugs

    Science.gov (United States)

    ... and hazardous drugs in the workplace. Pharmacy (OSHA Hospital eTool): reviews safety and health topics related to hazardous drugs, including drug handling, administration, storage, and disposal. OSHA has identified worker exposure ...

  1. Assessment of groundwater contamination risk using hazard quantification, a modified DRASTIC model and groundwater value, Beijing Plain, China.

    Science.gov (United States)

    Wang, Junjie; He, Jiangtao; Chen, Honghan

    2012-08-15

    Groundwater contamination risk assessment is an effective tool for groundwater management. Most existing risk assessment methods only consider the basic contamination process based upon evaluations of hazards and aquifer vulnerability. In view of groundwater exploitation potential, including the value of the contamination-threatened groundwater can provide relatively objective and targeted results to aid decision making. This study describes a groundwater contamination risk assessment method that integrates hazards, intrinsic vulnerability and groundwater value. The harmfulness of hazards was evaluated by quantifying contaminant properties and the infiltrating contaminant load, the intrinsic aquifer vulnerability was evaluated using a modified DRASTIC model, and the groundwater value was evaluated based on groundwater quality and aquifer storage. Two groundwater contamination risk maps were produced by combining the above factors: a basic risk map and a value-weighted risk map. The basic risk map was produced by overlaying the hazard map and the intrinsic vulnerability map. The value-weighted risk map was produced by overlaying the basic risk map and the groundwater value map. Relevant validation was carried out using contaminant distributions and site investigation. Using the Beijing Plain, China, as an example, thematic maps of the three factors and the two risks were generated. The thematic maps suggested that landfills, gas stations and oil depots, and industrial areas were the most harmful potential contamination sources. The western and northern parts of the plain were the most vulnerable areas and had the highest groundwater value. Additionally, both the basic and value-weighted risk classes in the western and northern parts of the plain were the highest, indicating that these regions deserve priority attention. Thematic maps should be updated regularly because of the dynamic characteristics of hazards. Subjectivity and validation means in assessing the
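
    For orientation, an intrinsic vulnerability index of the DRASTIC family is a weighted sum of ratings for seven hydrogeological factors. The sketch below uses the commonly cited standard DRASTIC weights and invented ratings for a single grid cell; the paper's modified model adjusts this scheme.

        # Minimal sketch of a DRASTIC-style vulnerability index for one cell.
        # Weights are the commonly cited standard values; ratings are hypothetical.
        DRASTIC_WEIGHTS = {
            "Depth_to_water": 5, "Recharge": 4, "Aquifer_media": 3, "Soil_media": 2,
            "Topography": 1, "Impact_vadose_zone": 5, "Conductivity": 3,
        }

        def drastic_index(ratings):
            return sum(DRASTIC_WEIGHTS[f] * r for f, r in ratings.items())

        cell_ratings = {"Depth_to_water": 7, "Recharge": 6, "Aquifer_media": 8,
                        "Soil_media": 5, "Topography": 10, "Impact_vadose_zone": 8,
                        "Conductivity": 6}
        print(drastic_index(cell_ratings))  # higher index -> more vulnerable aquifer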

  2. [Proportional hazards model of birth intervals among marriage cohorts since the 1960s].

    Science.gov (United States)

    Otani, K

    1987-01-01

    With a view to investigating the possibility of an attitudinal change towards the timing of 1st and 2nd births, proportional hazards model analysis of the 1st and 2nd birth intervals and univariate life table analysis were both carried out. Results showed that love matches and conjugal families immediately after marriage are accompanied by a longer 1st birth interval than others, even after controlling for other independent variables. Marriage cohort analysis also shows a net effect on the relative risk of having a 1st birth. Marriage cohorts since the mid-1960s demonstrate a shorter 1st birth interval than the 1961-63 cohort. With regard to the 2nd birth interval, longer 1st birth intervals, arranged marriages, conjugal families immediately following marriage, and higher ages at 1st marriage of women tended to provoke a longer 2nd birth interval. There is no interaction between the 1st birth interval and marriage cohort. Once other independent variables were controlled, with the exception of the marriage cohorts of the early 1970s, the authors found no effect of marriage cohort on the relative risk of having a 2nd birth. This suggests that an attitudinal change towards the timing of births in this period was mainly restricted to that of a 1st birth. Fluctuations in the 2nd birth interval during the 1970-72 marriage cohort were scrutinized in detail. As a result, the authors found that conjugal families after marriage, wives with low educational status, women with husbands in white collar professions, women with white collar fathers, and wives with high age at 1st marriage who married during 1970-72 and had a 1st birth interval during 1972-74 suffered most from the pronounced rise in the 2nd birth interval. This might be due to the relatively high sensitivity to a change in socioeconomic status; the oil crisis occurring around the time of marriage and 1st birth induced a delay in the 2nd birth. The unanimous decrease in the 2nd birth interval among the 1973

  3. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    Science.gov (United States)

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In
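
    As a reminder of the resistance law involved, the sketch below evaluates Manning's equation for a single cell; the roughness coefficient, geometry and slope are illustrative values, not inputs from the Arizona case studies.

        # Minimal sketch of Manning's equation in SI units (k = 1).
        def manning_velocity(n, hydraulic_radius_m, slope):
            """Mean velocity (m/s): V = (1/n) * R^(2/3) * S^(1/2)."""
            return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

        def manning_discharge(n, area_m2, hydraulic_radius_m, slope):
            """Discharge (m^3/s): Q = V * A."""
            return manning_velocity(n, hydraulic_radius_m, slope) * area_m2

        # Shallow flow on a fan surface: 0.3 m deep, 5 m wide (R ~ depth for a wide
        # channel), 1% slope, moderately rough surface.
        print(manning_discharge(n=0.05, area_m2=0.3 * 5.0, hydraulic_radius_m=0.3, slope=0.01))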

  4. The application of a calibrated 3D ballistic trajectory model to ballistic hazard assessments at Upper Te Maari, Tongariro

    Science.gov (United States)

    Fitzgerald, R. H.; Tsunematsu, K.; Kennedy, B. M.; Breard, E. C. P.; Lube, G.; Wilson, T. M.; Jolly, A. D.; Pawson, J.; Rosenberg, M. D.; Cronin, S. J.

    2014-10-01

    On 6 August, 2012, Upper Te Maari Crater, Tongariro volcano, New Zealand, erupted for the first time in over one hundred years. Multiple vents were activated during the hydrothermal eruption, ejecting blocks up to 2.3 km and impacting ~ 2.6 km of the Tongariro Alpine Crossing (TAC) hiking track. Ballistic impact craters were mapped to calibrate a 3D ballistic trajectory model for the eruption. This was further used to inform future ballistic hazard. Orthophoto mapping revealed 3587 impact craters with a mean diameter of 2.4 m. However, field mapping of accessible regions indicated an average of at least four times more observable impact craters and a smaller mean crater diameter of 1.2 m. By combining the orthophoto and ground-truthed impact frequency and size distribution data, we estimate that approximately 13,200 ballistic projectiles were generated during the eruption. The 3D ballistic trajectory model and a series of inverse models were used to constrain the eruption directions, angles and velocities. When combined with eruption observations and geophysical observations, the model indicates that the blocks were ejected in five variously directed eruption pulses, in total lasting 19 s. The model successfully reproduced the mapped impact distribution using a mean initial particle velocity of 200 m/s with an accompanying average gas flow velocity over a 400 m radius of 150 m/s. We apply the calibrated model to assess ballistic hazard from the August eruption along the TAC. By taking the field mapped spatial density of impacts and an assumption that an average ballistic impact will cause serious injury or death (casualty) over an 8 m^2 area, we estimate that the probability of casualty ranges from 1% to 16% along the affected track (assuming an eruption during the time of exposure). Future ballistic hazard and probabilities of casualty along the TAC are also assessed through application of the calibrated model. We model a magnitude larger eruption and illustrate
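
    To give a feel for the forward problem, the sketch below integrates a single block trajectory with quadratic air drag using a simple Euler scheme; the calibrated 3D model in the paper additionally handles topography, multiple pulses and an outward-flowing gas field. Block size, drag coefficient and launch parameters are assumptions for illustration only.

        # Minimal sketch of a ballistic block trajectory with quadratic drag,
        # integrated with forward Euler. All physical parameters are illustrative.
        import math

        def ballistic_range(v0=200.0, angle_deg=45.0, diameter=0.5, rho_rock=2500.0,
                            cd=1.0, rho_air=1.0, dt=0.01, g=9.81):
            radius = diameter / 2.0
            mass = rho_rock * (4.0 / 3.0) * math.pi * radius ** 3
            area = math.pi * radius ** 2
            vx = v0 * math.cos(math.radians(angle_deg))
            vz = v0 * math.sin(math.radians(angle_deg))
            x, z = 0.0, 0.0
            while z >= 0.0:
                speed = math.hypot(vx, vz)
                drag = 0.5 * rho_air * cd * area * speed   # drag force divided by speed
                vx -= (drag * vx / mass) * dt
                vz -= (g + drag * vz / mass) * dt
                x += vx * dt
                z += vz * dt
            return x

        print(f"range ~ {ballistic_range():.0f} m")  # of the order of the mapped ~2 km distances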

  5. Optimal energy-utilization ratio for long-distance cruising of a model fish

    Science.gov (United States)

    Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang

    2012-07-01

    The efficiency of total energy utilization and its optimization for long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research clarifying these well-known classic questions. Here, we specify the energy-utilization ratio (fη) as a measure of cruising efficiency, defined as the swimming speed divided by the sum of the standard metabolic rate and the energy consumption rate of muscle activities per unit mass. We formulate the function fη theoretically and show, from a basic dimensional analysis, that the main dimensionless parameters of our simplified model are the Reynolds number (Re) and the dimensionless standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output in various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. The energy consumption rate of muscle activities can in turn be estimated by dividing the hydrodynamic power by the muscle efficiency reported by previous researchers. The present results show the following: (1) When fη attains a maximum, the dimensionless parameter Rpm remains almost constant for the same fish species at different sizes. (2) In these cases, the tail beat period at optimal cruising is a power function of body length; e.g., the optimal tail beat period of sockeye salmon is approximately proportional to body length to the power of 0.78. Larger fish are thus better suited to long-distance cruising than smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers' estimates.
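
A minimal formula sketch of the definition paraphrased above, written in our own shorthand rather than the paper's notation (U is the cruising speed, R_sm the standard metabolic rate per unit mass, P_h the hydrodynamic power output, η_m the muscle efficiency, and m the body mass):

```latex
% Sketch of the energy-utilization ratio described in the abstract;
% the symbols are assumptions introduced here, not the paper's notation.
f_\eta \;=\; \frac{U}{\,R_{sm} + \dfrac{P_h}{\eta_m\, m}\,}
```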

  6. Modeling retrospective attribution of responsibility to hazard-managing institutions: an example involving a food contamination incident.

    Science.gov (United States)

    Johnson, Branden B; Hallman, William K; Cuite, Cara L

    2015-03-01

    Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships regarding five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it. Responsibility was rated higher the more aware and free the institution. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. © 2014 Society for Risk Analysis.

  7. Evaluating a multi-criteria model for hazard assessment in urban design. The Porto Marghera case study

    International Nuclear Information System (INIS)

    Luria, Paolo; Aspinall, Peter A.

    2003-01-01

    The aim of this paper is to describe a new approach to major industrial hazard assessment, which has been recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The opportunity to develop a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, quantitative risk analysis (QRA), only provided a list of individual quantitative risk values related to single locations. The experimental model is based on a multi-criteria approach--the Analytic Hierarchy Process--which introduces expert opinions, complementary skills and expertise from different disciplines in conjunction with traditional quantitative analysis. This permitted the generation of quantitative risk-assessment data from a series of qualitative assessments of the present situation and of three future scenarios, and the use of this information as indirect quantitative measures that could be aggregated into a global risk rating. This approach is in line with the main concepts proposed by the latest European directive on major accident hazards, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable)
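
A minimal sketch of the Analytic Hierarchy Process step referred to above: criterion weights derived from a pairwise-comparison matrix via its principal eigenvector, with Saaty's consistency ratio as a sanity check. The 3×3 judgement matrix is illustrative, not the Porto Marghera expert data.

```python
# Hypothetical AHP sketch: priority weights from a pairwise-comparison
# matrix (principal eigenvector) plus a consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # illustrative expert judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalised priority weights

# Consistency index CI = (lambda_max - n)/(n - 1); RI = 0.58 for n = 3
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), "CR:", round(ci / 0.58, 3))
```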

  8. A Comparison Study of Return Ratio-Based Academic Enrollment Forecasting Models. Professional File. Article 129, Spring 2013

    Science.gov (United States)

    Zan, Xinxing Anna; Yoon, Sang Won; Khasawneh, Mohammad; Srihari, Krishnaswami

    2013-01-01

    In an effort to develop a low-cost and user-friendly forecasting model to minimize forecasting error, we have applied average and exponentially weighted return ratios to project undergraduate student enrollment. We tested the proposed forecasting models with different sets of historical enrollment data, such as university-, school-, and…
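
A minimal sketch of a return-ratio projection of the kind described above, assuming an exponentially weighted ratio of successive enrollments; the headcounts and smoothing constant are illustrative.

```python
# Hypothetical sketch of exponentially weighted return-ratio forecasting:
# the ratio of each term's enrollment to the previous term's is smoothed and
# applied to the latest observed headcount. Numbers are illustrative.
def exp_weighted_return_ratio(enrollments, alpha=0.5):
    ratios = [b / a for a, b in zip(enrollments, enrollments[1:])]
    ewr = ratios[0]
    for r in ratios[1:]:
        ewr = alpha * r + (1 - alpha) * ewr
    return ewr

history = [4120, 4210, 4335, 4400, 4490]      # past fall headcounts
forecast = history[-1] * exp_weighted_return_ratio(history)
print(round(forecast))
```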

  9. Modelling the influence of the gas to melt ratio on the fraction solid of the surface in spray formed billets

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Pryds, Nini

    2006-01-01

    In this paper, the relationship between the Gas to Melt Ratio (GMR) and the solid fraction of an evolving billet surface is investigated numerically. The basis for the analysis is a recently developed integrated procedure for modelling the entire spray forming process. This model includes the ato...

  10. The Herfa-Neurode hazardous waste repository in bedded salt as an operating model for safe mixed waste disposal

    International Nuclear Information System (INIS)

    Rempe, N.T.

    1991-01-01

    For 18 years, the Herfa-Neurode underground repository has demonstrated the environmentally sound disposal of hazardous waste in a former potash mine. Its principal characteristics make it an excellent analogue to the Waste Isolation Pilot Plant (WIPP). The Environmental Protection Agency has ruled, in its first conditional no-migration determination, that it is reasonably certain that no hazardous constituents of the mixed waste destined for the WIPP during its test phase will migrate from the site for up to ten years. Knowledge of and reference to the Herfa-Neurode operating model may substantially improve the no-migration variance petition for the WIPP's disposal phase and thereby expedite its approval. 2 refs., 1 fig., 1 tab

  11. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J

    2008-02-11

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.

  12. Evaluation of the product ratio coherent model in forecasting mortality rates and life expectancy at births by States

    Science.gov (United States)

    Shair, Syazreen Niza; Yusof, Aida Yuzi; Asmuni, Nurin Haniah

    2017-05-01

    Coherent mortality forecasting models have recently received increasing attention, particularly in their application to sub-populations. The advantage of coherent models over independent models is the ability to forecast non-divergent mortality for two or more sub-populations. One such coherent model was recently developed by [1] and is known as the product-ratio model; it is an extended version of the functional independent model of [2]. The product-ratio model has been applied in a developed country, Australia [1], and has been extended in a developing nation, Malaysia [3]. While [3] accounted for coherency of mortality rates between genders and ethnic groups, coherency between states in Malaysia has never been explored. This paper forecasts the mortality rates of Malaysian sub-populations by state using the product-ratio coherent model and its independent counterpart, the functional independent model. The forecast accuracies of the two models are evaluated using out-of-sample error measurements: the mean absolute forecast error (MAFE) for age-specific death rates and the mean forecast error (MFE) for life expectancy at birth. We employ Malaysian mortality time series data from 1991 to 2014, segregated by age, gender and state.
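
A minimal sketch of the out-of-sample accuracy measures named above, MAFE over age-specific death rates and MFE over life expectancy at birth; the arrays are illustrative stand-ins for observed and forecast values.

```python
# Hypothetical sketch of the error measures: mean absolute forecast error
# (MAFE) for death rates and mean forecast error (MFE, signed bias) for
# life expectancy at birth. Values are illustrative.
import numpy as np

def mafe(observed_rates, forecast_rates):
    return np.mean(np.abs(observed_rates - forecast_rates))

def mfe(observed_e0, forecast_e0):
    return np.mean(forecast_e0 - observed_e0)

obs  = np.array([0.0021, 0.0007, 0.0010, 0.0025])   # death rates by age group
fcst = np.array([0.0023, 0.0006, 0.0011, 0.0028])
print(mafe(obs, fcst), mfe(np.array([74.1, 74.5]), np.array([73.8, 74.9])))
```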

  13. LISREL Model Medical Solid Infectious Waste Hazardous Hospital Management In Medan City

    Science.gov (United States)

    Simarmata, Verawaty; Siahaan, Ungkap; Pandia, Setiaty; Mawengkang, Herman

    2018-01-01

    Hazardous and toxic waste generated by hospital activities contains various components of medical solid waste, including heavy metals whose toxicity is cumulative and harmful to human health. Medical waste in gaseous, liquid or solid form generally falls into the hazardous and toxic category. Hospital operations aim to improve health and well-being, but they also produce waste that pollutes water, soil and air. Against this background, controlling pollution from medical solid waste is one of the fundamental problems in the city of Medan, and supervision of its management through licensing and control measures, in accordance with applicable regulations, is the principal means of addressing it.

  14. Hazard Identification of the Offshore Three-phase Separation Process Based on Multilevel Flow Modeling and HAZOP

    DEFF Research Database (Denmark)

    Wu, Jing; Zhang, Laibin; Lind, Morten

    2013-01-01

    HAZOP studies are widely accepted in chemical and petroleum industries as the method for conducting process hazard analysis related to design, maintenance and operation of the systems. Different tools have been developed to automate HAZOP studies. In this paper, a HAZOP reasoning method based on function-oriented modeling, Multilevel Flow Modeling (MFM), is extended with function roles. A graphical MFM editor, which is combined with the reasoning capabilities of the MFM Workbench developed by DTU, is applied to automate HAZOP studies. The method is proposed to support the “brain-storming” sessions...

  15. Hazard assessment of the Gschliefgraben earth flow (Austria) based on monitoring data and evolution modelling

    Science.gov (United States)

    Poisel, R.; Preh, A.; Hofmann, R.; Schiffer, M.; Sausgruber, Th.

    2009-04-01

    A rock slide onto the clayey - silty - sandy - pebbly masses in the Gschliefgraben (Upper Austria province, Lake Traunsee), which occurred in 2006, together with the humid autumn of 2007, triggered an earth flow comprising a volume of up to 5 million m³ and moving with a maximum displacement velocity of 5 m/day during the winter of 2007-2008. The possible damage was estimated at up to €60 million, due to the possible destruction of houses and of a road to a settlement with heavy tourism. Exploratory drillings revealed that the moving mass consists of an alternate bedding of thicker, less permeable clayey - silty layers and thinner, more permeable silty - sandy - pebbly layers. The movement front ran ahead in the creek bed. It was therefore assumed that water played an important role and that the earth flow moved due to water soaking into the ground from the area of the rock slide and migrating downslope. Inclinometer measurements showed that the uppermost, less permeable layer was sliding on a thin, more permeable layer. The movement process was analysed by numerical models (FLAC) and by conventional calculations in order to assess the hazard. The coupled flow and mechanical models showed that sections of the less permeable layer soaked with water were sliding on the thin, more permeable layer due to excessive outflow of water from the more permeable layer. These sections were thrust over the downslope-lying, less soaked areas, which therefore had higher strength. The material thrust over these areas, together with the advancing front of pore water pressures, caused the downslope material to fail and to be thrust over the material below it over a distance of some 50 m. A cyclic process was thus created, without any indication of a sudden sliding of the complete less permeable layer. Nevertheless, the inhabitants of 15 houses had to be evacuated for safety reasons. They could return to their homes after displacement velocities had decreased. Displacement monitoring by GPS showed that

  16. Modeling Flood Hazard Zones at the Sub-District Level with the Rational Model Integrated with GIS and Remote Sensing Approaches

    Directory of Open Access Journals (Sweden)

    Daniel Asare-Kyei

    2015-07-01

    Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability while global datasets have resolutions too coarse to be relevant for local scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS) techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI). Results show that about half of the study areas fall into high intensity flood zones. Empirical validation using a statistical confusion matrix and the principles of Participatory GIS shows that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported by local expert knowledge, which accurately classified 79% of communities deemed to be highly susceptible to flood hazard. The results will assist disaster managers to reduce the risk of flood disasters at the community level where risk outcomes first materialize.
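
A minimal sketch of the Rational Model peak-runoff step underlying the approach above, Q = C·i·A with unit conversion; the runoff coefficient, rainfall intensity and catchment area are illustrative.

```python
# Hypothetical sketch of the Rational Model: peak discharge Q = C * i * A,
# with intensity in mm/h and area in km^2 converted to give Q in m^3/s.
def rational_peak_flow(c, intensity_mm_per_h, area_km2):
    """Peak discharge in m^3/s from runoff coefficient c, rainfall intensity
    and contributing area."""
    i_m_per_s = intensity_mm_per_h / 1000.0 / 3600.0
    area_m2 = area_km2 * 1.0e6
    return c * i_m_per_s * area_m2

print(round(rational_peak_flow(0.6, 80.0, 12.5), 1), "m^3/s")
```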

  17. Modal Damping Ratio and Optimal Elastic Moduli of Human Body Segments for Anthropometric Vibratory Model of Standing Subjects.

    Science.gov (United States)

    Gupta, Manoj; Gupta, T C

    2017-10-01

    The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using the two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for the different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for a 50th percentile U.S. male, except at very low frequencies, validates the developed human body model. Reasonable agreement between the theoretical response curve and the experimental response envelope for an average Indian male likewise affirms the technique used for constructing the vibratory model of a standing person. The present work thus provides an effective technique for constructing a subject-specific damped vibratory model from physical measurements.
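
As a rough illustration of why the dominant transmissibility peaks constrain the modal damping ratios, the sketch below evaluates the classical single-degree-of-freedom base-excitation transmissibility (not the paper's 13-DOF model); the natural frequency and damping ratio are assumed values.

```python
# Hypothetical SDOF sketch: platform-to-head style transmissibility whose
# peak height near resonance is governed by the damping ratio.
import numpy as np

def transmissibility(freq_hz, fn_hz, zeta):
    r = freq_hz / fn_hz
    return np.sqrt((1 + (2 * zeta * r) ** 2) /
                   ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

f = np.linspace(0.5, 25, 200)
tr = transmissibility(f, fn_hz=5.0, zeta=0.3)     # illustrative values
print(round(f[np.argmax(tr)], 2), round(tr.max(), 2))
```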

  18. Rockfall hazard assessment by coupling three-dimensional, process based models and field-based tree-ring data

    Science.gov (United States)

    Trappmann, Daniel; Stoffel, Markus; Corona, Christophe

    2014-05-01

    A realistic evaluation of the spatial and temporal patterns of rockfalls is fundamental for the management of this very common hazard in mountain environments. Process-based, three-dimensional simulation models are nowadays capable of reproducing the spatial probability of rockfalls with reasonable accuracy through the simulation of numerous individual trajectories on highly resolved digital terrain models. At the same time, however, simulation models typically fail to quantify the real frequency of rockfalls. The analysis of impact scars on trees, in contrast, yields empirical rockfall frequencies, but trees may not be present at the location of interest and rare trajectories may not necessarily be captured due to the limited age of forest stands on rockfall slopes. In this article, we demonstrate that the coupling of modeling with tree-ring techniques may overcome the limitations inherent to both approaches. Based on the analysis of 64 cells (40 × 40 m) of a rockfall slope located above a 1631-m-long road section in the Swiss Alps, we present results from 488 rockfalls recorded in 1260 trees. We show that tree impact data can be used not only (i) to reconstruct the frequency of rockfalls for individual cells, but also (ii) to calibrate the rockfall model Rockyfor3D and (iii) to transform simulated trajectories into real empirical frequencies. Calibrated simulation results are in good agreement with empirical rockfall frequencies and exhibit significant differences in rockfall activity between the cells (zones) along the road section. Empirical frequencies, expressed as rock passages per meter road section, also enable quantification and direct comparison of the hazard potential between the zones. The contribution provides an approach for hazard zoning procedures that complements traditional methods with a quantification of rockfall frequencies through a systematic inclusion of impact records in trees.

  19. A fast global tsunami modeling suite as a trans-oceanic tsunami hazard prediction and mitigation tool

    Science.gov (United States)

    Mohammed, F.; Li, S.; Jalali Farahani, R.; Williams, C. R.; Astill, S.; Wilson, P. S.; B, S.; Lee, R.

    2014-12-01

    The past decade has witnessed two mega-tsunami events, the 2004 Indian Ocean tsunami and the 2011 Japan tsunami, and multiple major tsunami events: 2006 Java and Kuril Islands, 2007 Solomon Islands, 2009 Samoa and 2010 Chile, to name a few. These events generated both local and far-field tsunami inundations with runup ranging from a few meters to around 40 m in the coastal impact regions. With a majority of the coastal population at risk, there is a need for a sophisticated outlook towards catastrophe risk estimation and a quick mitigation response. At the same time, tools and information are needed to aid advanced tsunami hazard prediction. There is an increased need for insurers, reinsurers and federal hazard management agencies to quantify coastal inundations and the vulnerability of coastal habitat to tsunami inundations. A novel tool is developed to model local and far-field tsunami generation, propagation and inundation to estimate tsunami hazards. The tool is a combination of the NOAA MOST propagation database and an efficient and fast GPU (Graphical Processing Unit)-based non-linear shallow water wave model solver. The tsunamigenic seismic sources are mapped onto the NOAA unit source distribution along subduction zones in the ocean basin. Slip models are defined for tsunamigenic seismic sources through a slip distribution on the unit sources while maintaining the limits of fault areas. A GPU-based finite volume solver is used to simulate non-linear shallow water wave propagation, inundation and runup. Deformation on the unit sources provides initial conditions for modeling local impacts, while the wave history from the propagation database provides boundary conditions for far-field impacts. The modeling suite provides good agreement for basin-wide tsunami propagation and validates local and far-field tsunami inundations.

  20. Performance analysis of wind turbines at low tip-speed ratio using the Betz-Goldstein model

    International Nuclear Information System (INIS)

    Vaz, Jerson R.P.; Wood, David H.

    2016-01-01

    Highlights: • General formulations for power and thrust at any tip-speed ratio are developed. • The Joukowsky model for the blades is modified with specific vortex distributions. • Betz-Goldstein model is shown to be the most consistent at low tip-speed ratio. • The effects of finite blade number are assessed using tip loss factors. • Tip loss for finite blade number may complicate the vortex breakdown. - Abstract: Analyzing wind turbine performance at low tip-speed ratio is challenging due to the relatively high level of swirl in the wake. This work presents a new approach to wind turbine analysis including swirl for any tip-speed ratio. The methodology uses the induced velocity field from vortex theory in the general momentum theory, in the form of the turbine thrust and torque equations. Using the constant bound circulation model of Joukowsky, the swirl velocity becomes infinite on the wake centreline even at high tip-speed ratio. Rankine, Vatistas and Delery vortices were used to regularize the Joukowsky model near the centreline. The new formulation prevents the power coefficient from exceeding the Betz-Joukowsky limit. An alternative calculation, based on the varying circulation for Betz-Goldstein optimized rotors is shown to have the best general behavior. Prandtl’s approximation for the tip loss and a recent alternative were employed to account for the effects of a finite number of blades. The Betz-Goldstein model appears to be the only one resistant to vortex breakdown immediately behind the rotor for an infinite number of blades. Furthermore, the dependence of the induced velocity on radius in the Betz-Goldstein model allows the power coefficient to remain below Betz-Joukowsky limit which does not occur for the Joukowsky model at low tip-speed ratio.
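
For reference, a minimal sketch of the classical actuator-disc relation behind the Betz-Joukowsky limit cited above (this is not the Betz-Goldstein formulation of the paper): C_P = 4a(1-a)², maximised at a = 1/3, giving 16/27 ≈ 0.593.

```python
# Hypothetical refresher on the actuator-disc power coefficient that defines
# the Betz-Joukowsky limit referenced in the abstract.
import numpy as np

a = np.linspace(0.0, 0.5, 501)        # axial induction factor
cp = 4 * a * (1 - a) ** 2
print(round(a[np.argmax(cp)], 3), round(cp.max(), 3))   # ~0.333, ~0.593
```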

  1. Effectiveness of water infrastructure for river flood management – Part 1: Flood hazard assessment using hydrological models in Bangladesh

    Directory of Open Access Journals (Sweden)

    M. A. Gusyev

    2015-06-01

    This study introduces the flood hazard assessment part of a global flood risk assessment (Part 2), conducted with a distributed hydrological Block-wise TOP (BTOP) model and a GIS-based Flood Inundation Depth (FID) model. In this study, the 20 km grid BTOP model was developed with globally available data and applied to the Ganges, Brahmaputra and Meghna (GBM) river basin. The BTOP model was calibrated with observed river discharges in Bangladesh and was applied for climate change impact assessment to produce flood discharges at each BTOP cell under present and future climates. For Bangladesh, cumulative flood inundation maps were produced using the FID model with the BTOP-simulated flood discharges, which allowed us to consider levee effectiveness for the reduction of flood inundation. Under climate change, the flood hazard increased in both flood discharge and inundation area for the 50- and 100-year floods. These preliminary results show that the proposed methodology can partly overcome the limitations of data unavailability and produces flood maps that can be used for the nationwide flood risk assessment presented in Part 2 of this study.

  2. Landslide Hazard Assessment and Mapping in the Guil Catchment (Queyras, Southern French Alps): From Landslide Inventory to Susceptibility Modelling

    Science.gov (United States)

    Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique

    2016-04-01

    Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As a part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km²), Queyras, to assess landslide hazard, which has been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale, in order to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce landslide susceptibility mapping at the 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical methods (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling
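
A minimal sketch of the logistic-regression susceptibility step mentioned above: landslide presence/absence regressed on terrain predictors and mapped as a predicted probability. The predictors and data below are synthetic stand-ins, not the SAMCO inventory.

```python
# Hypothetical logistic-regression susceptibility sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # e.g. slope, curvature, distance to stream
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]   # per-cell probability
print(model.coef_.round(2), susceptibility[:5].round(2))
```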

  3. Seismic hazard in the eastern United States

    Science.gov (United States)

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  4. Come rain or shine: Multi-model Projections of Climate Hazards affecting Transportation in the South Central United States

    Science.gov (United States)

    Mullens, E.; Mcpherson, R. A.

    2016-12-01

    This work develops detailed trends in climate hazards affecting the Department of Transportation's Region 6, in the South Central U.S. Firstly, a survey was developed to gather information regarding weather and climate hazards in the region from the transportation community, identifying key phenomena and thresholds to evaluate. Statistically downscaled datasets were obtained from the Multivariate Adaptive Constructed Analogues (MACA) project and the Asynchronous Regional Regression Model (ARRM), for a total of 21 model projections, two coupled model intercomparisons (CMIP3 and CMIP5), and four emissions pathways (A1Fi, B1, RCP8.5, RCP4.5). Specific hazards investigated include winter weather, freeze-thaw cycles, hot and cold extremes, and heavy precipitation. Projections for each of these variables were calculated for the region, utilizing spatial mapping and time series analysis at the climate division level. The results indicate that cold-season phenomena such as winter weather, freeze-thaw, and cold extremes decrease in intensity and frequency, particularly with the higher emissions pathways. Nonetheless, the specific model and downscaling method yield variability in magnitudes, with the most notable decreasing trends late in the 21st century. Hot days show a pronounced increase, particularly with greater emissions, producing annual mean 100 °F day frequencies by the late 21st century analogous to the 2011 heatwave over the central Southern Plains. Heavy precipitation, evidenced by return period estimates and counts-over-thresholds, also shows notable increasing trends, particularly from the recent past through the mid-21st century. Conversely, mean precipitation does not show significant trends and is regionally variable. Precipitation hazards (e.g., winter weather, extremes) diverge between downscaling methods and their associated model samples much more substantially than temperature, suggesting that the choice of global model and downscaled data is particularly

  5. A hazard-based duration model for analyzing crossing behavior of cyclists and electric bike riders at signalized intersections.

    Science.gov (United States)

    Yang, Xiaobao; Huan, Mei; Abdel-Aty, Mohamed; Peng, Yichuan; Gao, Ziyou

    2015-01-01

    This paper presents a hazard-based duration approach to investigate riders' waiting times, violation hazards, associated risk factors, and their differences between cyclists and electric bike riders at signalized intersections. A total of 2322 two-wheeled riders approaching the intersections during red light periods were observed in Beijing, China. The data were classified into censored and uncensored data to distinguish between safe crossing and red-light running behavior. The results indicated that the red-light crossing behavior of most riders depended on waiting time: riders became increasingly inclined to stop waiting and cross against the traffic light as the waiting duration grew. Over half of the observed riders could not endure 49 s or longer, while 25% could endure 97 s or longer. Rider type, gender, waiting position, conformity tendency and crossing traffic volume were identified as having significant effects on riders' waiting times and violation hazards. Electric bike riders were found to be more sensitive than cyclists to external risk factors such as other riders' crossing behavior and crossing traffic volume. Moreover, unobserved heterogeneity was examined in the proposed models. The findings of this paper explain when and why cyclists and electric bike riders run against the red light at intersections. The results are useful for traffic design and management agencies implementing strategies to enhance the safety of riders. Copyright © 2014 Elsevier Ltd. All rights reserved.
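
A minimal sketch of a parametric hazard-based duration model of the kind described above, using a Weibull form whose violation hazard rises with waiting time; the shape and scale parameters are illustrative choices that roughly reproduce the reported ~49 s median and ~97 s upper quartile, not the paper's fitted covariate model.

```python
# Hypothetical Weibull duration sketch: h(t) is the instantaneous rate of
# running the red light, S(t) the probability of still waiting at time t.
# Parameters are illustrative, not fitted values from the study.
import numpy as np

def weibull_hazard(t, k=1.1, lam=68.0):
    return (k / lam) * (t / lam) ** (k - 1)

def weibull_survival(t, k=1.1, lam=68.0):
    return np.exp(-(t / lam) ** k)

for t in (30, 49, 97):
    print(t, round(weibull_hazard(t), 4), round(weibull_survival(t), 2))
```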

  6. User's manual of a computer code for seismic hazard evaluation for assessing the threat to a facility by fault model. SHEAT-FM

    International Nuclear Information System (INIS)

    Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide

    2005-09-01

    To establish a reliability evaluation method for aged structural components, we developed a probabilistic seismic hazard evaluation code, SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), using a seismic motion prediction method based on a fault model. In order to improve the seismic hazard evaluation, this code takes the latest knowledge in the field of earthquake engineering into account; for example, it incorporates the group delay time of observed records and an updating process model for active faults. This report describes the user's guide of SHEAT-FM, including the outline of the seismic hazard evaluation, the specification of input data, a sample problem for a model site, system information, and the execution method. (author)

  7. Inconsistency of ammonium-sulfate aerosol ratios with thermodynamic models in the eastern US: a possible role of organic aerosol

    Science.gov (United States)

    Silvern, Rachel F.; Jacob, Daniel J.; Kim, Patrick S.; Marais, Eloise A.; Turner, Jay R.; Campuzano-Jost, Pedro; Jimenez, Jose L.

    2017-04-01

    Thermodynamic models predict that sulfate aerosol (S(VI) ≡ H2SO4(aq) + HSO4^- + SO4^2-) should take up available ammonia (NH3) quantitatively as ammonium (NH4^+) until the ammonium sulfate stoichiometry (NH4)2SO4 is close to being reached. This uptake of ammonia has important implications for aerosol mass, hygroscopicity, and acidity. When ammonia is in excess, the ammonium-sulfate aerosol ratio R = [NH4^+] / [S(VI)] should approach 2, with excess ammonia remaining in the gas phase. When ammonia is in deficit, it should be fully taken up by the aerosol as ammonium and no significant ammonia should remain in the gas phase. Here we report that sulfate aerosol in the eastern US in summer has a low ammonium-sulfate ratio despite excess ammonia, and we show that this is at odds with thermodynamic models. The ammonium-sulfate ratio averages only 1.04 ± 0.21 mol mol^-1 in the Southeast, even though ammonia is in large excess, as shown by the ammonium-sulfate ratio in wet deposition and by the presence of gas-phase ammonia. It further appears that the ammonium-sulfate aerosol ratio is insensitive to the supply of ammonia, remaining low even as the wet deposition ratio exceeds 6 mol mol^-1. While the ammonium-sulfate ratio in wet deposition has increased by 5.8% yr^-1 from 2003 to 2013 in the Southeast, consistent with SO2 emission controls, the ammonium-sulfate aerosol ratio decreased by 1.4-3.0% yr^-1. Thus, the aerosol is becoming more acidic even as SO2 emissions decrease and ammonia emissions stay constant; this is incompatible with simple sulfate-ammonium thermodynamics. A tentative explanation is that sulfate particles are increasingly coated by organic material, retarding the uptake of ammonia. Indeed, the ratio of organic aerosol (OA) to sulfate in the Southeast increased from 1.1 to 2.4 g g^-1 over the 2003-2013 period as sulfate decreased. We implement a simple kinetic mass transfer limitation for ammonia uptake to sulfate aerosols in the GEOS-Chem chemical transport

  8. Seismic hazard of the Kivu rift (western branch, East African Rift system): new neotectonic map and seismotectonic zonation model

    Science.gov (United States)

    Delvaux, Damien; Mulumba, Jean-Luc; Sebagenzi Mwene Ntabwoba, Stanislas; Fiama Bondo, Silvanos; Kervyn, François; Havenith, Hans-Balder

    2017-04-01

    The first detailed probabilistic seismic hazard assessment has been performed for the Kivu and northern Tanganyika rift region in Central Africa. This region, which forms the central part of the Western Rift Branch, is one of the most seismically active parts of the East African rift system. It was already integrated in large-scale seismic hazard assessments, but here we defined a finer zonation model with 7 different zones representing the lateral variation of the geological and geophysical setting across the region. In order to build the new zonation model, we compiled homogeneous cross-border geological, neotectonic and seismotectonic maps over the central part of East D.R. Congo, SW Uganda, Rwanda, Burundi and NW Tanzania and defined a new neotectonic scheme. The seismic risk assessment is based on a new earthquake catalogue, compiled on the basis of various local and global earthquake catalogues. The use of macroseismic epicenters determined from felt earthquakes allowed the time range to be extended back to the beginning of the 20th century, spanning 126 years, with 1068 events. The magnitudes have been homogenized to Mw and aftershocks removed. From this initial catalogue, a catalogue of 359 events from 1956 to 2015 and with M > 4.4 has been extracted for the seismic hazard assessment. The seismotectonic zonation includes 7 seismic source areas that have been defined on the basis of the regional geological structure, neotectonic fault systems, basin architecture and distribution of thermal springs and earthquake epicenters. The Gutenberg-Richter seismic hazard parameters were determined using both the least square linear fit and the maximum likelihood method (Kijko & Smit aue program). Seismic hazard maps have been computed with the Crisis 2012 software using 3 different attenuation laws. We obtained higher PGA values (475-year return period) for the Kivu rift region than the previous estimates (Delvaux et al., 2016). They vary laterally as a function of the tectonic
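
A minimal sketch of the maximum-likelihood Gutenberg-Richter fit mentioned above (the Aki/Utsu estimator with a binning correction), applied to a synthetic magnitude sample rather than the compiled catalogue.

```python
# Hypothetical Gutenberg-Richter sketch: b-value by maximum likelihood and
# a-value from the event count above the completeness magnitude.
import numpy as np

def gutenberg_richter_mle(mags, m_min, dm=0.1):
    mags = np.asarray(mags)
    mags = mags[mags >= m_min]
    b = np.log10(np.e) / (mags.mean() - (m_min - dm / 2))
    a = np.log10(len(mags)) + b * m_min
    return a, b

rng = np.random.default_rng(1)
sample = 4.4 + rng.exponential(scale=0.45, size=359)   # synthetic magnitudes
print(tuple(round(v, 2) for v in gutenberg_richter_mle(sample, m_min=4.4)))
```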

  9. Modeling the Radiation Hazards Along the Trajectories of Space Vehicles for Various Purposes

    Science.gov (United States)

    Grichshenko, Valentina

    2016-07-01

    The paper discusses results from simulating the radiation hazard along the trajectories of low-orbit spacecraft for various purposes, as well as geostationary and navigation satellites. Criteria for the reliability of memory cells in space are developed, accounting for the influence of cosmic rays (CR) and for differences in the geophysical and geomagnetic conditions along the spacecraft (SV) orbit. Numerical values of the vertical geomagnetic rigidity and the CR flux are presented, together with an assessment of correlated memory-cell failures along low-orbit spacecraft trajectories. The results are used to forecast the radiation environment along the SV orbit and the reliability of memory cells in space, and to optimize the nominal equipment kit and payload of Kazakhstan spacecraft.

  10. Comparative study on DuPont analysis and DEA models for measuring stock performance using financial ratio

    Science.gov (United States)

    Arsad, Roslah; Shaari, Siti Nabilah Mohd; Isa, Zaidi

    2017-11-01

    Determining stock performance using financial ratios is challenging for many investors and researchers. Financial ratios can indicate the strengths and weaknesses of a company's stock performance. There are five categories of financial ratios, namely liquidity, efficiency, leverage, profitability and market ratios. It is important to interpret the ratios correctly for proper financial decision making. The purpose of this study is to compare the performance of listed companies in Bursa Malaysia using Data Envelopment Analysis (DEA) and DuPont analysis models. The study is conducted in 2015 and involves 116 consumer products companies listed in Bursa Malaysia. The Data Envelopment Analysis estimation method computes efficiency scores and ranks the companies accordingly. The Alirezaee and Afsharian method of analysis, based on the Charnes, Cooper and Rhodes (CCR) model with Constant Returns to Scale (CRS), is employed. DuPont analysis is a traditional tool for measuring the operating performance of companies. In this study, DuPont analysis is used to evaluate three different aspects: profitability, efficiency of asset utilization and financial leverage. Return on Equity (ROE) is also calculated in the DuPont analysis. This study finds that the two analysis models provide different rankings of the selected samples. Hypothesis testing based on Pearson's correlation indicates that there is no correlation between the rankings produced by DEA and DuPont analysis. The DEA ranking model proposed by Alirezaee and Afsharian is unstable; the method cannot provide a complete ranking because the values of the Balance Index are equal and zero.
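
A minimal sketch of the DuPont decomposition used above: return on equity factored into net profit margin, asset turnover and the equity multiplier; the financial figures are illustrative.

```python
# Hypothetical DuPont sketch: ROE = net margin * asset turnover * equity
# multiplier. Inputs are illustrative, not Bursa Malaysia data.
def dupont_roe(net_income, revenue, total_assets, total_equity):
    net_margin = net_income / revenue
    asset_turnover = revenue / total_assets
    equity_multiplier = total_assets / total_equity
    roe = net_margin * asset_turnover * equity_multiplier
    return roe, (net_margin, asset_turnover, equity_multiplier)

roe, parts = dupont_roe(net_income=120, revenue=1500,
                        total_assets=900, total_equity=400)
print(round(roe, 3), [round(p, 3) for p in parts])
```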

  11. Using Ontologies to Support Model-based Exploration of the Dependencies between Causes and Consequences of Hazards

    OpenAIRE

    Bloomfield, R. E.; Parisaca-Vargas, A.

    2015-01-01

    Hazard identification and hazard analysis are difficult and essential parts of safety engineering. These activities are very demanding and mostly manual. There is an increasing need for improved analysis tools and techniques. In this paper we report research that focuses on supporting the early stages of hazard identification. A state-based hazard analysis process is presented to explore dependencies between causes and consequences of hazards. The process can be used to automate the analysis ...

  12. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, Laura K.; Vogel, Richard M.

    2016-04-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
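
A minimal formula sketch, in standard survival-analysis notation rather than the paper's, of how the hazard function, survival function and average return period (expected failure time) of the failure-time series T relate:

```latex
% Standard hazard-function relations for the failure time T with density f_T
% and CDF F_T; symbols are our own shorthand, not taken from the paper.
h(t) = \frac{f_T(t)}{1 - F_T(t)}, \qquad
S(t) = \exp\!\Big(-\int_0^t h(u)\,\mathrm{d}u\Big), \qquad
\mathbb{E}[T] = \int_0^\infty S(t)\,\mathrm{d}t .
```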

  13. Modeling investigation of controlling factors in the increasing ratio of nitrate to non-seasalt sulfate in precipitation over Japan

    Science.gov (United States)

    Itahashi, Syuichi; Uno, Itsushi; Hayami, Hiroshi; Fujita, Shin-ichi

    2014-08-01

    Anthropogenic emissions in East Asia have been increasing during the three decades since 1980, as the population of East Asia has grown and the economies in East Asian countries have expanded. This has been particularly true in China, where NOx emissions have been rising continuously. However, because of flue-gas desulfurization systems introduced as part of China's 11th Five-Year Plan (2006-2010), SO2 emissions in China reached a peak in 2005-2006 and have declined since then. These drastic changes in emission levels of acidifying species are likely to have caused substantial changes in the precipitation chemistry. The absolute concentration of compounds in precipitation is inherently linked to precipitation amount; therefore, we use the ratio of nitrate (NO3^-) to non-seasalt sulfate (nss-SO4^2-) concentration in precipitation as an index for evaluating acidification, which we call Ratio. In this study, we analyzed the long-term behavior of Ratio in precipitation over the Japanese archipelago during 2000-2011 and estimated the factors responsible for changes in Ratio in precipitation by using a model simulation. This analysis showed that Ratio was relatively constant at 0.5-0.6 between 2000 and 2005, and subsequently increased to 0.6-0.7 between 2006 and 2011. These changes in Ratio corresponded remarkably well to the changes in the NOx/SO2 emission ratio in China; this correspondence suggests that anthropogenic emissions from China were responsible for most of the change in precipitation chemistry over Japan. Sensitivity analysis showed that the increase in NOx emissions and the decrease in SO2 emissions contributed equally to the increases in Ratio.

  14. Snow-avalanche modeling and hazard level assessment using statistical and physical modeling, DSS and WebGIS: case study from Czechia

    Science.gov (United States)

    Blahut, J.; Balek, J.; Juras, R.; Klimes, J.; Klose, Z.; Roubinek, J.; Pavlasek, J.

    2014-12-01

    Snow-avalanche modeling and hazard level assessment are important issues to be solved within mountain regions worldwide. In Czechia, there are two mountain ranges (the Krkonoše and Jeseníky Mountains) that experience regular avalanche activity every year. The Mountain Rescue Service is responsible for issuing avalanche bulletins. However, its approaches still lack objective assessments and procedures for hazard level estimation, mainly because an expert avalanche information system is missing. This paper presents preliminary results from a project funded by the Ministry of Interior of the Czech Republic. The project is focused on the development of an information system for snow-avalanche hazard level forecasting. It is composed of three main modules, which should act as a Decision Support System (DSS) for the Mountain Rescue Service. Firstly, a snow-avalanche susceptibility model is used for delimiting areas where avalanches can occur, based on accurate statistical analyses. For that purpose a vast database is used, containing more than 1100 avalanche events from 1961/62 till the present. Secondly, physical modeling of the avalanches is performed on avalanche paths using the RAMMS modeling code. Regular paths, where avalanches occur every year, and irregular paths are assessed. Their footprints are updated using return period information for each path. Thirdly, snow distribution and stability models (distributed HBV-ETH, Snowtran 3D, Snowpack and Alpine 3D) are used to assess the critical conditions for avalanche release. For calibration of the models, meteorological and snow-cover data and snow pits are used. These three parts are coupled in a WebGIS platform, which serves as the principal component of the DSS for snow-avalanche hazard level assessment.

  15. Planning ahead for asteroid and comet hazard mitigation, phase 1: parameter space exploration and scenario modeling

    Energy Technology Data Exchange (ETDEWEB)

    Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory

    2009-01-01

    The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.

  16. Modeling hydrologic and geomorphic hazards across post-fire landscapes using a self-organizing map approach

    Science.gov (United States)

    Friedel, Michael J.

    2011-01-01

    Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organized map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criteria following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
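
A minimal sketch of the cluster-selection step described above: k-means applied to stand-ins for SOM neuron weight vectors, with the Davies-Bouldin index used to choose the number of conceptual regional models; the data are synthetic.

```python
# Hypothetical sketch: pick the number of clusters of SOM neurons using the
# Davies-Bouldin criterion (lower is better). The neuron weights are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(42)
neurons = rng.normal(size=(144, 10))      # e.g. a 12x12 SOM with 10 variables

scores = {}
for k in range(2, 12):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(neurons)
    scores[k] = davies_bouldin_score(neurons, labels)

best_k = min(scores, key=scores.get)
print(best_k, round(scores[best_k], 2))
```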

  17. Production ecology of agroforestry systems: A minimal mechanistic model and analytical derivation of the land equivalent ratio

    NARCIS (Netherlands)

    Keesman, K.J.; Werf, van der W.; Keulen, van H.

    2007-01-01

    In this paper, the yield and the land equivalent ratio (LER) of a silvo-arable agroforestry (SAF) system, containing one tree and one crop species, are analyzed analytically using a minimal mechanistic model describing the system dynamics. Light competition between tree and crop is considered using
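
A minimal sketch of the land equivalent ratio analysed above: the sum of the relative tree and crop yields in the mixed system, with LER > 1 indicating a land-use advantage; the yields are illustrative.

```python
# Hypothetical LER sketch: LER = (tree yield in mixture / tree monoculture)
#                              + (crop yield in mixture / crop monoculture).
def land_equivalent_ratio(tree_mix, tree_mono, crop_mix, crop_mono):
    return tree_mix / tree_mono + crop_mix / crop_mono

print(land_equivalent_ratio(tree_mix=3.2, tree_mono=5.0,
                            crop_mix=2.8, crop_mono=4.0))   # 1.34
```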

  18. Evaluating score- and feature-based likelihood ratio models for multivariate continuous data: applied to forensic MDMA comparison

    NARCIS (Netherlands)

    Bolck, A.; Ni, H.; Lopatka, M.

    2015-01-01

    Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are
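
A minimal sketch of a feature-based likelihood ratio for a single continuous feature, under simple Gaussian within- and between-source assumptions (illustrative parameters, not the MDMA data): the evidence density under the same-source hypothesis divided by that under the different-source hypothesis.

```python
# Hypothetical one-feature LR sketch with Gaussian within/between-source
# variability; parameters are illustrative.
from scipy.stats import norm

def likelihood_ratio(delta, sigma_within, sigma_between):
    """LR for the observed difference `delta` between two measurements."""
    same = norm.pdf(delta, loc=0.0, scale=(2 * sigma_within ** 2) ** 0.5)
    diff = norm.pdf(delta, loc=0.0,
                    scale=(2 * sigma_within ** 2 + 2 * sigma_between ** 2) ** 0.5)
    return same / diff

print(round(likelihood_ratio(delta=0.1, sigma_within=0.2, sigma_between=1.0), 2))
```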

  19. CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.

    1993-06-01

    CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
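
A minimal sketch of the dose calculation described in the last sentence above: the average daily potential dose as the sum over pathways of contact-medium concentration times an intake factor; the pathway names and values are illustrative, not CalTOX defaults.

```python
# Hypothetical exposure-dose sketch: dose = sum over pathways of
# concentration * intake factor. Units and values are illustrative.
def average_daily_dose(concentrations, intake_factors):
    """Sum over pathways of concentration times intake factor (mg/kg-day)."""
    return sum(c * f for c, f in zip(concentrations, intake_factors))

conc   = {"tap_water": 0.004, "home_soil": 1.2, "personal_air": 0.0008}
intake = {"tap_water": 0.029, "home_soil": 1.4e-6, "personal_air": 0.27}
print(average_daily_dose(conc.values(), intake.values()))
```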

  20. Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach

    Science.gov (United States)

    Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur

    2010-01-01

    A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation found that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and the results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited non-landslide event data for more comprehensive evaluation. Additional factors that may improve
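
A minimal sketch of a rainfall intensity-duration triggering threshold of the kind tested above, I = a·D^(-b), flagging a potential landslide trigger when the measured intensity exceeds the threshold for that duration; the coefficients are illustrative, not the study's fitted values.

```python
# Hypothetical intensity-duration threshold sketch; coefficients are assumed.
def exceeds_id_threshold(intensity_mm_h, duration_h, a=12.0, b=0.6):
    threshold = a * duration_h ** (-b)
    return intensity_mm_h > threshold

# 8 mm/h sustained for 6 h exceeds the illustrative threshold (~4.1 mm/h)
print(exceeds_id_threshold(intensity_mm_h=8.0, duration_h=6.0))
```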

  1. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    Science.gov (United States)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data has transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in
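
    The footprint idea above can be illustrated with a much simpler stand-in than the conditional multivariate extremes model used in the study: a Gaussian copula across gauges with GPD tails, which already produces event sets whose return period varies from gauge to gauge. The correlation matrix, threshold, and GPD parameters below are assumptions for illustration only.

```python
# Simplified stand-in for generating correlated extreme-flow "footprints":
# a Gaussian copula over gauges with GPD marginal tails. The study uses a
# conditional multivariate extremes model; this sketch only illustrates the
# idea of spatially varying return periods. Correlation matrix and GPD
# parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_gauges, n_events = 3, 1000
corr = np.array([[1.0, 0.7, 0.4],
                 [0.7, 1.0, 0.6],
                 [0.4, 0.6, 1.0]])                   # gauge-to-gauge correlation
gpd_shape, gpd_scale, threshold = 0.1, 50.0, 200.0   # per-gauge tail model

# Correlated uniforms via a Gaussian copula
z = rng.multivariate_normal(np.zeros(n_gauges), corr, size=n_events)
u = stats.norm.cdf(z)

# Transform to exceedance magnitudes; the return period differs gauge to gauge
flows = threshold + stats.genpareto.ppf(u, gpd_shape, scale=gpd_scale)
return_periods = 1.0 / (1.0 - u)                     # in units of the exceedance rate

print(flows[:3].round(1))
print(return_periods[:3].round(1))
```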

  2. Evaluation of reference tissue model and tissue ratio method for 5-HTT using [(123)I] ADAM tracer.

    Science.gov (United States)

    Yang, Bang-Hung; Wang, Shyh-Jen; Chou, Yuan-Hwa; Su, Tung-Ping; Chen, Shih-Pei; Lee, Jih-Shian; Chen, Jyh-Cheng

    2008-12-01

    The serotonin (5-hydroxytryptamine, or 5-HT) transporters (5-HTT) are target-sites for commonly used antidepressants. [(123)I] ADAM is a novel radiotracer that selectively binds the 5-HTT of the central nervous system. The aim of this study was to compare the four-parameter model (FPM) with the three-parameter model (TPM) of the non-invasive reference tissue model (RTM) for 5-HTT quantification, using the cerebellum as an indirect input function. Furthermore, we compared the tracer kinetic model with the tissue ratio (TR) method. The binding potential (BP) values derived from both models were almost the same, but the ratio of delivery (R(1)) in the TPM had a smaller standard deviation than in the FPM. There was also a significant correlation between BP and the specific uptake ratio (SUR). In conclusion, the simplified reference tissue model (SRTM) was the better choice because of its stability and convenient implementation for non-invasive quantification of brain SPECT studies. The correlation found between BP and SUR supports the use of the TR method for quantification of 5-HTT to avoid arterial sampling in dynamic SPECT scans.
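
    For reference, the tissue ratio (TR) method mentioned above reduces to a simple ratio of activity concentrations. The sketch below shows one common way the specific uptake ratio (SUR) is computed from a target region and the cerebellum; the numerical values are illustrative, not study data.

```python
# Minimal sketch of the tissue-ratio (TR) method used for comparison:
# specific uptake ratio (SUR) from target-region and cerebellum (reference)
# activity concentrations. Values are illustrative, not study data.
def specific_uptake_ratio(c_target, c_cerebellum):
    """SUR = (target - reference) / reference, using the cerebellum as reference."""
    return (c_target - c_cerebellum) / c_cerebellum

print(specific_uptake_ratio(c_target=5.8, c_cerebellum=3.2))  # ~0.81
```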

  3. CAirTOX, An inter-media transfer model for assessing indirect exposures to hazardous air contaminants

    International Nuclear Information System (INIS)

    McKone, T.E.

    1994-01-01

    Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out
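
    The mean-plus-coefficient-of-variation input convention described above lends itself to Monte-Carlo propagation. The sketch below shows one way such inputs could be sampled, assuming (for illustration only) a lognormal form; CAirTOX itself is a spreadsheet model and this is not its code.

```python
# Sketch of how a mean + coefficient of variation (CV) input specification
# supports Monte-Carlo uncertainty analysis. The lognormal form is an
# assumption for illustration; CAirTOX itself is spreadsheet-based.
import numpy as np

def sample_lognormal(mean, cv, size, rng):
    """Draw samples whose arithmetic mean and CV match the given values."""
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

rng = np.random.default_rng(0)
emission = sample_lognormal(mean=2.0, cv=0.5, size=100_000, rng=rng)  # g/s, hypothetical
print(emission.mean().round(2), (emission.std() / emission.mean()).round(2))
```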

  4. Reproductive Hazards

    Science.gov (United States)

    ... and the ability to have children. Something that affects reproductive health is called a reproductive hazard. Examples include: Radiation Metals such as lead and mercury Chemicals such as pesticides Cigarettes Some viruses Alcohol For men, a reproductive hazard can affect the ...

  5. Forecasting Flood Hazard on Real Time: Implementation of a New Surrogate Model for Hydrometeorological Events in an Andean Watershed.

    Science.gov (United States)

    Contreras Vargas, M. T.; Escauriaza, C. R.; Westerink, J. J.

    2017-12-01

    In recent years, the occurrence of flash floods and landslides produced by hydrometeorological events in Andean watersheds has had devastating consequences in urban and rural areas near the mountains. Two factors have hindered the hazard forecast in the region: 1) The spatial and temporal variability of climate conditions, which reduce the time range over which storm features can be predicted; and 2) The complexity of the basin morphology that characterizes the Andean region, which increases the velocity and the sediment transport capacity of flows that reach urbanized areas. Hydrodynamic models have become key tools to assess potential flood risks. Two-dimensional (2D) models based on the shallow-water equations are widely used to determine, with high accuracy and resolution, the evolution of flow depths and velocities during floods. However, the high computational requirements and long computational times have encouraged research to develop more efficient methodologies for predicting flood propagation in real time. Our objective is to develop new surrogate models (i.e. metamodeling) to quasi-instantaneously evaluate flood propagation in the Andes foothills. By means of a small set of parameters, we define storms for a wide range of meteorological conditions. Using a 2D hydrodynamic model coupled in mass and momentum with the sediment concentration, we compute, at high fidelity, the propagation of a set of floods. Results are used as a database to perform sophisticated interpolation/regression, and to efficiently approximate the flow depths and velocities at critical points during real storms. This is the first application of surrogate models to evaluate flood propagation in the Andes foothills, improving the efficiency of flood hazard prediction. The model also opens new opportunities to improve early warning systems, helping decision makers to inform citizens and enhancing the resilience of cities near mountain regions. This work has been supported by CONICYT/FONDAP grant
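
    A minimal sketch of the surrogate (metamodel) idea is given below: precomputed high-fidelity runs over a few storm parameters are interpolated so that flow depth at a critical point can be evaluated quasi-instantaneously. The storm parameters, training values, and radial-basis-function choice are assumptions for illustration, not the authors' configuration.

```python
# Illustrative surrogate ("metamodel"): interpolate flow depth at a critical
# point as a function of a few storm parameters, using precomputed
# high-fidelity runs as training data. Parameters and values are hypothetical.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Design points: (peak rainfall intensity [mm/h], storm duration [h])
X_train = np.array([[10, 2], [10, 6], [30, 2], [30, 6], [50, 2], [50, 6]], float)
depth_train = np.array([0.2, 0.5, 0.8, 1.6, 1.3, 2.4])  # max depth [m] from 2D runs

surrogate = RBFInterpolator(X_train, depth_train, kernel="thin_plate_spline")

# Quasi-instantaneous evaluation during a real storm
storm = np.array([[42.0, 4.5]])
print(f"predicted max depth: {surrogate(storm)[0]:.2f} m")
```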

  6. {E2}/{M1} ratio for the γN → Δ transition in the chiral quark soliton model

    Science.gov (United States)

    Watabe, T.; Christov, Chr. V.; Goeke, K.

    1995-02-01

    We calculate the electric quadrupole to magnetic dipole transition ratio {E2}/{M1} for the reaction γN → Δ (1232) in the chiral quark soliton model. The calculated {E2}/{M1} ratio is in good agreement with the most recent experimental data. We obtain a non-zero negative value for the electric quadrupole N - Δ transition moment, which suggests an oblate deformed charge structure of the nucleon and/or the delta isobar. Other observables related to this quantity, namely the N - Δ mass splitting, the isovector charge radius, and the isovector magnetic moment, are properly reproduced as well.

  7. Improving Stiffness-to-weight Ratio of Spot-welded Structures based upon Nonlinear Finite Element Modelling

    Science.gov (United States)

    Zhang, Shengyong

    2017-07-01

    Spot welding has been widely used for vehicle body construction due to its advantages of high speed and adaptability for automation. An effort to increase the stiffness-to-weight ratio of spot-welded structures is investigated based upon nonlinear finite element analysis. Topology optimization is conducted for reducing weight in the overlapping regions by choosing an appropriate topology. Three spot-welded models (lap, double-hat and T-shape) that approximate “typical” vehicle body components are studied for validating and illustrating the proposed method. It is concluded that removing underutilized material from overlapping regions can result in a significant increase in structural stiffness-to-weight ratio.

  8. Circular depolarization ratios of single water droplets and finite ice circular cylinders: a modeling study

    Directory of Open Access Journals (Sweden)

    M. Nicolet

    2012-05-01

    Full Text Available Computations of the phase matrix elements for single water droplets and ice crystals in fixed orientations are presented to determine if circular depolarization δC is more accurate than linear depolarization for phase discrimination. T-matrix simulations were performed to calculate right-handed and left-handed circular depolarization ratios, δ+C and δ−C respectively, and to compare them with linear ones. Ice crystals are assumed to have a circular cylindrical shape where their surface-equivalent diameters range up to 5 μm. The circular depolarization ratios of ice particles were generally higher than linear depolarization and depended mostly on the particle orientation as well as their sizes. The fraction of non-detectable ice crystals (δ < 0.05) was smaller considering a circularly polarized light source, reaching 4.5%. However, water droplets also depolarized light circularly for scattering angles smaller than 179° and size parameters smaller than 6 at side- and backscattering regions. Differentiation between ice crystals and water droplets might be difficult for experiments performed at backscattering angles which deviate from 180°, unlike LIDAR applications. Instruments exploiting the difference in the P44/P11 ratio at a scattering angle around 115° are significantly constrained in distinguishing between water and ice because small droplets with size parameters between 5 and 10 do cause very high circular depolarizations at this angle. If the absence of the liquid phase is confirmed, the use of circular depolarization in single particle detection is more sensitive and less affected by particle orientation.

  9. Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.

    Science.gov (United States)

    Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N

    2009-02-01

    Research undertaken over the last 40 years has identified the irrefutable relationship between the long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a General Linear Regression Model Irr-Cad was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (Longitude E 98 degrees 59'-E 98 degrees 63' and Latitude N 16 degrees 67'-16 degrees 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that Irr-Cad 'predicted' mean Field Order total soil Cd, was significantly (p Myanmar, Lao PDR, Thailand and Yunnan Province, China). These countries also have actively and historically mined Zn, Pb, and Cu deposits where Cd is likely to be a potential hazard if un-controlled discharge/runoff enters areas of rice cultivation. As such, it is envisaged that the Irr-Cad model could be applied for Cd hazard assessment and effectively form the basis of intervention options and policy decisions to protect public health, livelihoods, and export security.

  10. Implementation of NGA-West2 ground motion models in the 2014 U.S. National Seismic Hazard Maps

    Science.gov (United States)

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Harmsen, Stephen C.; Frankel, Arthur D.

    2014-01-01

    The U.S. National Seismic Hazard Maps (NSHMs) have been an important component of seismic design regulations in the United States for the past several decades. These maps present earthquake ground shaking intensities at specified probabilities of being exceeded over a 50-year time period. The previous version of the NSHMs was developed in 2008; during 2012 and 2013, scientists at the U.S. Geological Survey have been updating the maps based on their assessment of the “best available science,” resulting in the 2014 NSHMs. The update includes modifications to the seismic source models and the ground motion models (GMMs) for sites across the conterminous United States. This paper focuses on updates in the Western United States (WUS) due to the use of new GMMs for shallow crustal earthquakes in active tectonic regions developed by the Next Generation Attenuation (NGA-West2) project. Individual GMMs, their weighted combination, and their impact on the hazard maps relative to 2008 are discussed. In general, the combined effects of lower medians and increased standard deviations in the new GMMs have caused only small changes, within 5–20%, in the probabilistic ground motions for most sites across the WUS compared to the 2008 NSHMs.
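
    The weighted combination of GMMs mentioned above can be sketched as a logic-tree average of log ground motions. The medians, sigmas, and weights below are placeholders, not the NGA-West2 values or the NSHM logic-tree weights.

```python
# Sketch of combining several ground motion models (GMMs) with logic-tree
# weights, as done when building hazard curves. Median/sigma values and
# weights below are placeholders, not the NGA-West2 numbers.
import numpy as np

# ln(PGA) medians and standard deviations from individual GMMs at one site
medians_ln = np.array([np.log(0.25), np.log(0.30), np.log(0.22)])
sigmas_ln = np.array([0.65, 0.70, 0.68])
weights = np.array([0.4, 0.35, 0.25])   # must sum to 1

mean_ln = np.sum(weights * medians_ln)
# Total variance includes the between-model spread about the weighted mean
var_ln = np.sum(weights * (sigmas_ln**2 + (medians_ln - mean_ln)**2))

print(f"weighted median PGA: {np.exp(mean_ln):.3f} g, sigma_ln: {np.sqrt(var_ln):.2f}")
```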

  11. A model for determining total ketogenic ratio (TKR) for evaluating the ketogenic property of a weight-reduction diet.

    Science.gov (United States)

    Cohen, I A

    2009-09-01

    Ketogenic weight-reduction dieting methods have existed since antiquity. Recent research has demonstrated their value in controlling type 2 diabetes. Although research done in the 1920s provided a mathematical model of non-weight-reduction ketogenic clinical diets using the concept of a ketogenic ratio (KR), little has been done to evaluate the ketogenic nature of purported ketogenic weight-reduction diets. The mathematical model of Woodyatt is valid only under isocaloric conditions where dietary energy intake is balanced by energy use. It is hypothesised that under certain conditions of weight loss, energy deficit can predict utilization of stored lipid so that a modified formula for total ketogenic ratio (TKR) may be derived. Such a predictive mathematical model may be a useful tool in predicting the efficacy of weight-reduction diets and adapting such diets to individual patient needs.
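
    For context, the isocaloric starting point is Woodyatt's ketogenic ratio, commonly written as KR = (0.9F + 0.46P)/(1.0C + 0.1F + 0.58P). The sketch below evaluates that classic ratio and then hints at the TKR idea by crediting fat mobilised from an assumed energy deficit; the paper's exact TKR formula is not reproduced here.

```python
# Worked sketch of Woodyatt's classic ketogenic ratio (KR), the isocaloric
# starting point the TKR modifies. The TKR adjustment for stored lipid
# mobilised under an energy deficit is only hinted at here (hypothetical
# grams of endogenous fat), since the paper's exact formula is not reproduced.
def ketogenic_ratio(fat_g, protein_g, carb_g):
    """Woodyatt KR = (0.9*F + 0.46*P) / (1.0*C + 0.1*F + 0.58*P)."""
    ketogenic = 0.9 * fat_g + 0.46 * protein_g
    antiketogenic = 1.0 * carb_g + 0.1 * fat_g + 0.58 * protein_g
    return ketogenic / antiketogenic

# Dietary intake alone (hypothetical weight-reduction diet)
print(round(ketogenic_ratio(fat_g=60, protein_g=90, carb_g=30), 2))

# Crude TKR-style idea: add endogenous fat oxidised to cover the deficit
endogenous_fat_g = 50  # assumed, from a ~450 kcal/day deficit at 9 kcal/g
print(round(ketogenic_ratio(fat_g=60 + endogenous_fat_g, protein_g=90, carb_g=30), 2))
```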

  12. Interannual Variability of Southern Ocean Diatom Production in a Model With Flexible Si:N:C:Chl Ratios

    Science.gov (United States)

    Voelker, C. D.; Hohn, S.; Losch, M.; Losa, S.; Wolf-Gladrow, D. A.

    2008-12-01

    The requirements of phytoplankton for nutrients strongly couple the fluxes of different biologically important elements, such as carbon, nitrogen, silicon and iron. In the Southern Ocean this coupling manifests itself in iron limitation controlling the primary production, but also influencing the Si:N-ratio of nutrient drawdown and biomass export. We investigate the interaction between nutrient inputs, biomass composition and the biological pump in the Southern Ocean with the help of a global biogeochemical model that allows for physiological variations of the Si:N:C:Chl ratios. Our model results agree with observations of enhanced Si:N drawdown ratios around Antarctica and show a belt of strong vertical flux of biogenic Si over the location of the 'opal belt' in the sediment. In the model, the export of BSi around Antarctica is increased through high Si:C ratios in phytoplankton caused by iron limitation. Integration of the model forced by meteorological reanalysis fields leads to interannual variability of sea ice extent, nutrient upwelling and sea surface temperature. We investigate the consequences of this variability for the biological pump in different regions of the Southern Ocean, comparing the model with JGOFS data, and analyse the role of stoichiometric variations.

  13. Unified model for the electromechanical coupling factor of orthorhombic piezoelectric rectangular bar with arbitrary aspect ratio

    Directory of Open Access Journals (Sweden)

    R. Rouffaud

    2017-02-01

    Full Text Available Piezoelectric Single Crystals (PSCs) are increasingly used in the manufacture of ultrasonic transducers and in particular for linear arrays or single element transducers. Among these PSCs, according to their microstructure and poled direction, some exhibit a mm2 symmetry. The analytical expression of the electromechanical coupling coefficient for a vibration mode along the poling direction for a piezoelectric rectangular bar resonator is established. It is based on the mode coupling theory and the fundamental energy ratio definition of electromechanical coupling coefficients. This unified formula for the mm2 symmetry class material is obtained as a function of an aspect ratio (G), where the two extreme cases correspond to a thin plate (with a vibration mode characterized by the thickness coupling factor, kt) and a thin bar (characterized by k33′). To optimize the k33′ value related to the thin bar design, a rotation of the crystallographic axis in the plane orthogonal to the poling direction is done to choose the highest value for the PIN-PMN-PT single crystal. Finally, finite element calculations are performed to deduce resonance frequencies and coupling coefficients in a large range of G values to confirm the developed analytical relations.

  14. Differences in the Aspect Ratio of Gold Nanorods that Induce Defects in Cell Membrane Models.

    Science.gov (United States)

    Lins, Paula M P; Marangoni, Valéria S; Uehara, Thiers M; Miranda, Paulo B; Zucolotto, Valtencir; Cancino-Bernardi, Juliana

    2017-12-19

    Understanding the interactions between biomolecules and nanomaterials is of great importance for many areas of nanomedicine and bioapplications. Although studies in this area have been performed, the interactions between cell membranes and nanoparticles are not fully understood. Here, we investigate the interactions that occur between the Langmuir monolayers of dipalmitoylphosphatidyl glycerol (DPPG) and dipalmitoylphosphatidyl choline (DPPC) with gold nanorods (NRs) with three aspect ratios, and gold nanoparticles. Our results showed that the aspect ratio of the NRs influenced the interactions with both monolayers, which suggests that physical morphology and electrostatic forces govern the interactions in the DPPG-NR system, whereas the van der Waals interactions are predominant in the DPPC-NR systems. Size influences the expansion isotherms in both systems, but the lipid tails remain conformationally ordered upon expansion, which suggests phase separation between the lipids and nanomaterials at the interface. The coexistence of lipid and NP regions affects the elasticity of the monolayer. When there is coexistence between two phases, the elasticity does not reflect the lipid packing state but depends on the elasticity of the NP islands. Therefore, the results corroborate that nanomaterials influence the packing and the phase behavior of the mimetic cell membranes. For this reason, developing a methodology to understand the membrane-nanomaterial interactions is of great importance.

  15. A Conceptual Model of Future Volcanism at Medicine Lake Volcano, California - With an Emphasis on Understanding Local Volcanic Hazards

    Science.gov (United States)

    Molisee, D. D.; Germa, A.; Charbonnier, S. J.; Connor, C.

    2017-12-01

    Medicine Lake Volcano (MLV) is the most voluminous of all the Cascade Volcanoes (~600 km3), and has the highest eruption frequency after Mount St. Helens. Detailed mapping by USGS colleagues has shown that during the last 500,000 years MLV erupted >200 lava flows ranging from basalt to rhyolite, produced at least one ash-flow tuff, one caldera forming event, and at least 17 scoria cones. Underlying these units are 23 additional volcanic units that are considered to be pre-MLV in age. Despite the very high likelihood of future eruptions, fewer than 60 of 250 mapped volcanic units (MLV and pre-MLV) have been dated reliably. A robust set of eruptive ages is key to understanding the history of the MLV system and to forecasting the future behavior of the volcano. The goals of this study are to 1) obtain additional radiometric ages from stratigraphically strategic units; 2) recalculate recurrence rate of eruptions based on an augmented set of radiometric dates; and 3) use lava flow, PDC, ash fall-out, and lahar computational simulation models to assess the potential effects of discrete volcanic hazards locally and regionally. We identify undated target units (units in key stratigraphic positions to provide maximum chronological insight) and obtain field samples for radiometric dating (40Ar/39Ar and K/Ar) and petrology. Stratigraphic and radiometric data are then used together in the Volcano Event Age Model (VEAM) to identify changes in the rate and type of volcanic eruptions through time, with statistical uncertainty. These newly obtained datasets will be added to published data to build a conceptual model of volcanic hazards at MLV. Alternative conceptual models, for example, may be that the rate of MLV lava flow eruptions are nonstationary in time and/or space and/or volume. We explore the consequences of these alternative models on forecasting future eruptions. As different styles of activity have different impacts, we estimate these potential effects using simulation

  16. Allometric modeling does not determine a dimensionless power function ratio for maximal muscular function.

    Science.gov (United States)

    Batterham, A M; George, K P

    1997-12-01

    In the exercise sciences, simple allometry (y = ax^b) is rapidly becoming the method of choice for scaling physiological and human performance data for differences in body size. The purpose of this study is to detail the specific regression diagnostics required to validate such models. The sum (T, in kg) of the "snatch" and "clean-and-jerk" lifts of the medalists from the 1995 Men's and Women's World Weightlifting Championships was modeled as a function of body mass (M, in kg). A log-linearized allometric model (ln T = ln a + b ln M) yielded a common mass exponent (b) of 0.47 (95% confidence interval = 0.43-0.51, P < 0.01). However, size-related patterned deviations in the residuals were evident, indicating that the allometric model was poorly specified and that the mass exponent was not size independent. Model respecification revealed that second-order polynomials provided the best fit, supporting previous modeling of weightlifting data (R. G. Sinclair. Can. J. Appl. Sport Sci. 10: 94-98, 1985). The model parameters (means +/- SE) were T = (21.48 +/- 16.55) + (6.119 +/- 0.359)M - (0.022 +/- 0.002)M^2 (R^2 = 0.97) for men and T = (-20.73 +/- 24.14) + (5.662 +/- 0.722)M - (0.031 +/- 0.005)M^2 (R^2 = 0.92) for women. We conclude that allometric scaling should be applied only when all underlying model assumptions have been rigorously evaluated.
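
    The competing fits discussed above can be sketched directly: a log-linearized allometric fit followed by a second-order polynomial respecification, with the allometric residuals inspected for size-related patterning. The data below are synthetic stand-ins, not the championship results analysed in the study.

```python
# Sketch of the two competing fits: log-linearized allometry (ln T = ln a + b ln M)
# versus a second-order polynomial in body mass. Data are synthetic, for
# illustration only; the paper uses 1995 World Championship medalist totals.
import numpy as np

mass = np.array([54, 59, 64, 70, 76, 83, 91, 99, 108, 130], float)           # kg, hypothetical
total = np.array([290, 310, 330, 350, 370, 385, 400, 410, 420, 430], float)  # kg lifted

# Simple allometry: fit ln T = ln a + b ln M
b, ln_a = np.polyfit(np.log(mass), np.log(total), 1)
print(f"mass exponent b = {b:.2f}")

# Respecified model: T = c0 + c1*M + c2*M^2
c2, c1, c0 = np.polyfit(mass, total, 2)
print(f"T = {c0:.1f} + {c1:.2f}*M + {c2:.4f}*M^2")

# Patterned residuals in the allometric fit indicate mis-specification
resid = np.log(total) - (ln_a + b * np.log(mass))
print("allometric residuals:", np.round(resid, 3))
```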

  17. Modelling the effect of changing design fineness ratio of an airship on its aerodynamic lift and drag performance

    Science.gov (United States)

    Jalasabri, J.; Romli, F. I.; Harmin, M. Y.

    2017-12-01

    In developing successful airship designs, it is important to fully understand the effect of the design on the performance of the airship. The aim of this research work is to establish the trend for the effect of an airship's design fineness ratio on its aerodynamic performance. An approximate computer-aided design (CAD) model of the Atlant-100 airship is constructed using CATIA software and it is applied in the computational fluid dynamics (CFD) simulation analysis using Star-CCM+ software. In total, 36 simulation runs are executed with different combinations of values for design fineness ratio, altitude and velocity. The simulation results are analyzed using MINITAB to capture the relationships between these factors and the lift and drag coefficients. Based on the results, it is concluded that the design fineness ratio does have a significant impact on the generated aerodynamic lift and drag forces on the airship.

  18. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

    Full Text Available A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well-known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Having the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on nanocrystalline cellulose.

  19. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    Science.gov (United States)

    McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.

    2017-09-01

    The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a ~150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  20. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    Science.gov (United States)

    McNamara, Daniel E.; Yeck, William; Barnhart, William D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, Amod; Hough, S.E.; Benz, Harley M.; Earle, Paul

    2017-01-01

    The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a ~ 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10–15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  1. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    Science.gov (United States)

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746

  2. Computer Models Used to Support Cleanup Decision Making at Hazardous and Radioactive Waste Sites

    Science.gov (United States)

    This report is a product of the Interagency Environmental Pathway Modeling Workgroup. This report will help bring a uniform approach to solving environmental modeling problems common to site remediation and restoration efforts.

  3. Modeling speech intelligibility based on the signal-to-noise envelope power ratio

    DEFF Research Database (Denmark)

    Jørgensen, Søren

    background noise, reverberation and noise reduction processing on speech intelligibility, indicating that the model is more general than traditional modeling approaches. Moreover, the model accounts for phase distortions when it includes a mechanism that evaluates the variation of envelope power across...... (audio) frequency. However, because the SNRenv is based on the long-term average envelope power, the model cannot account for the greater intelligibility typically observed in fluctuating noise compared to stationary noise. To overcome this limitation, a multi-resolution version of the sEPSM is presented...... distorted by reverberation or spectral subtraction. The relationship between the SNRenv based decision-metric and psychoacoustic speech intelligibility is further evaluated by generating stimuli with different SNRenv but the same overall power SNR. The results from the corresponding psychoacoustic data...
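
    The decision metric named in the title, the signal-to-noise envelope power ratio (SNRenv), can be caricatured as a comparison of envelope power in the noisy speech against that of the noise alone. The sketch below collapses the sEPSM's audio- and modulation-frequency filterbanks into a single broadband estimate, so it is only a schematic of the idea, not the model.

```python
# Very reduced sketch of the SNRenv idea behind the sEPSM: compare the
# envelope power of the noisy speech with that of the noise alone.
# Real implementations filter into audio and modulation bands; this
# collapses everything into a single broadband estimate for illustration.
import numpy as np
from scipy.signal import hilbert

def envelope_power(x):
    """AC power of the Hilbert envelope, normalised by its DC power."""
    env = np.abs(hilbert(x))
    dc = env.mean()
    return np.mean((env - dc) ** 2) / dc**2

def snr_env_db(noisy_speech, noise):
    p_mix, p_noise = envelope_power(noisy_speech), envelope_power(noise)
    p_speech = max(p_mix - p_noise, 1e-10)   # excess envelope power attributed to speech
    return 10.0 * np.log10(p_speech / p_noise)

rng = np.random.default_rng(0)
noise = rng.normal(size=16000)
speech_like = np.sin(2 * np.pi * 4 * np.linspace(0, 1, 16000)) * rng.normal(size=16000)
print(f"SNRenv ~ {snr_env_db(speech_like + noise, noise):.1f} dB")
```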

  4. Site characterization and modeling to estimate movement of hazardous materials in groundwater

    International Nuclear Information System (INIS)

    Ditmars, J.D.

    1988-01-01

    A quantitative approach for evaluating the effectiveness of site characterization measurement activities is developed and illustrated with an example application to hypothetical measurement schemes at a potential geologic repository site for radioactive waste. The method is a general one and could also be applied at sites for underground disposal of hazardous chemicals. The approach presumes that measurements will be undertaken to support predictions of the performance of some aspect of a constructed facility or natural system. It requires a quantitative performance objective, such as groundwater travel time or contaminant concentration, against which to compare predictions of performance. The approach recognizes that such predictions are uncertain because the measurements upon which they are based are uncertain. The effectiveness of measurement activities is quantified by a confidence index, β, that reflects the number of standard deviations separating the best estimate of performance from the predetermined performance objective. Measurements that reduce the uncertainty in predictions lead to increased values of β. The link between measurement and prediction uncertainties, required for the evaluation of β for a particular measurement scheme, identifies the measured quantities that significantly affect prediction uncertainty. The components of uncertainty in those key measurements are spatial variation, noise, estimation error, and measurement bias. 7 refs., 4 figs

  5. Hydrogen and oxygen isotope ratios in body water and hair: modeling isotope dynamics in nonhuman primates.

    Science.gov (United States)

    O'Grady, Shannon P; Valenzuela, Luciano O; Remien, Christopher H; Enright, Lindsey E; Jorgensen, Matthew J; Kaplan, Jay R; Wagner, Janice D; Cerling, Thure E; Ehleringer, James R

    2012-07-01

    The stable isotopic composition of drinking water, diet, and atmospheric oxygen influence the isotopic composition of body water ((2)H/(1)H, (18)O/(16)O expressed as δ(2)H and δ(18)O). In turn, body water influences the isotopic composition of organic matter in tissues, such as hair and teeth, which are often used to reconstruct historical dietary and movement patterns of animals and humans. Here, we used a nonhuman primate system (Macaca fascicularis) to test the robustness of two different mechanistic stable isotope models: a model to predict the δ(2)H and δ(18)O values of body water and a second model to predict the δ(2)H and δ(18)O values of hair. In contrast to previous human-based studies, use of nonhuman primates fed controlled diets allowed us to further constrain model parameter values and evaluate model predictions. Both models reliably predicted the δ(2)H and δ(18)O values of body water and of hair. Moreover, the isotope data allowed us to better quantify values for two critical variables in the models: the δ(2)H and δ(18)O values of gut water and the (18)O isotope fractionation associated with a carbonyl oxygen-water interaction in the gut (α(ow)). Our modeling efforts indicated that better predictions for body water and hair isotope values were achieved when the isotopic composition of gut water approached that of body water. Additionally, the value of α(ow) was 1.0164, in close agreement with the only other previously measured observation (microbial spore cell walls), suggesting robustness of this fractionation factor across different biological systems. © 2012 Wiley Periodicals, Inc.

  6. Effects of Pt and ionomer ratios on the structure of catalyst layer: A theoretical model for polymer electrolyte fuel cells

    Science.gov (United States)

    Ishikawa, H.; Sugawara, Y.; Inoue, G.; Kawase, M.

    2018-01-01

    The 3D structure of the catalyst layer (CL) in the polymer electrolyte fuel cell (PEFC) is modeled with a Pt/carbon (Pt/C) ratio of 0.4-2.3 and ionomer/carbon (i/C) ratio of 0.5-1.5, and the structural properties are evaluated by numerical simulation. The models are constructed by mimicking the actual shapes of Pt particles and carbon aggregates, as well as the ionomer adhesion in real CLs. CLs with different compositions are characterized by structural properties such as Pt inter-particle distance, ionomer coating thickness, pore size distribution, tortuosity, and ionomer coverage on Pt. The results for Pt/C = 1.0, i/C = 1.0 with Pt loading of 0.3 mg cm-2 and 50% porosity are validated against measured data for CLs with the same composition. With increasing i/C ratio, the smaller pores disappear and the number of isolated pores increases; while the ionomer connection and its coverage on Pt are significantly enhanced at i/C ∼1.0. With increasing Pt/C ratio, the Pt inter-particle distance decreases as the particles connect with each other. The tortuosity of the pores and the ionomer exhibits a trade-off relation depending on the ionomer volume. Further CL design concepts to optimize both O2 diffusion and H+ conduction are discussed.

  7. Climate change in a Point-Over-Threshold model: an example on ocean-wave-storm hazard in NE Spain

    Science.gov (United States)

    Tolosana-Delgado, R.; Ortego, M. I.; Egozcue, J. J.; Sánchez-Arcilla, A.

    2009-09-01

    Climatic change is a problem of general concern. When dealing with hazardous events such as wind-storms, heavy rainfall or ocean-wave storms this concern is even more serious. Climate change might imply an increase of human and material losses, and it is worth devoting efforts to detect it. Hazard assessment of such events is often carried out with a point-over-threshold (POT) model. Time-occurrence of events is assumed to be Poisson distributed, and the magnitude of each event is modelled as an arbitrary random variable, whose upper tail is described by a Generalized Pareto Distribution (GPD). Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. The GPD models excesses over a threshold. If X is the magnitude of an event and x0 a value of the support of X, the excess over the threshold x0 is Y = X - x0, conditioned to X > x0. Therefore, the support of Y is (a segment of) the positive real line. The GPD has a scale parameter β > 0 and a real-valued shape parameter ξ, which defines three different sub-families of distributions. GPD distributions with ξ < 0 have a bounded upper tail; distributions with ξ > 0 have infinite heavy tails (ysup = +∞); and for ξ = 0 we obtain the exponential distribution, which has an infinite support but a well-behaved tail. The GPD distribution function is F_Y(y|β,ξ) = 1 - (1 + ξy/β)^(-1/ξ), for 0 ≤ y. In the present case study, we may be sure that there is a maximal height related to physical limitations (sea depth, fetch distance, water density, etc.). Thus, we choose as an a priori statement that ξ < 0; this approach has been applied successfully to daily rainfall data and ocean-wave-height. How to assess the impact of climate change on hazardous events? In a climate change scenario, we can consider the model for description of the variable as stable, while its parameters may be taken as a function of time. Thus, magnitudes are taken in a log-scale. Excesses over a threshold are modeled by a GPD with a
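
    A worked sketch of the POT ingredients follows: a GPD with a negative shape parameter (bounded heights, as argued above) combined with a Poisson exceedance rate to give return levels. The threshold, rate, and GPD parameters are hypothetical, not the NE Spain estimates.

```python
# Sketch of the POT ingredients: excesses over a threshold modelled by a GPD
# (here with a negative shape, i.e. a bounded upper tail) combined with a
# Poisson occurrence rate to obtain a return level. Numbers are hypothetical.
import numpy as np
from scipy.stats import genpareto

threshold = 4.0            # m, significant wave height threshold
rate = 3.2                 # mean number of exceedances per year (Poisson)
shape, scale = -0.2, 0.8   # xi < 0 imposed a priori (bounded heights), beta

def return_level(T_years):
    """Level exceeded on average once every T_years."""
    p_exceed = 1.0 / (rate * T_years)          # per-event exceedance probability
    return threshold + genpareto.ppf(1.0 - p_exceed, shape, scale=scale)

for T in (10, 50, 100):
    print(f"{T:>3}-yr wave height: {return_level(T):.2f} m")
```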

  8. Pseudopotential multi-relaxation-time lattice Boltzmann model for cavitation bubble collapse with high density ratio

    International Nuclear Information System (INIS)

    Shan Ming-Lei; Zhu Chang-Ping; Yao Cheng; Yin Cheng; Jiang Xiao-Yan

    2016-01-01

    The dynamics of the cavitation bubble collapse is a fundamental issue for the bubble collapse application and prevention. In the present work, the modified forcing scheme for the pseudopotential multi-relaxation-time lattice Boltzmann model developed by Li Q et al. [Li Q, Luo K H and Li X J 2013 Phys. Rev. E 87 053301] is adopted to develop a cavitation bubble collapse model. In the respects of coexistence curves and Laplace law verification, the improved pseudopotential multi-relaxation-time lattice Boltzmann model is investigated. It is found that the thermodynamic consistency and surface tension are independent of kinematic viscosity. By homogeneous and heterogeneous cavitation simulation, the ability of the present model to describe the cavitation bubble development as well as the cavitation inception is verified. The bubble collapse between two parallel walls is simulated. The dynamic process of a collapsing bubble is consistent with the results from experiments and simulations by other numerical methods. It is demonstrated that the present pseudopotential multi-relaxation-time lattice Boltzmann model is applicable and efficient, and the lattice Boltzmann method is an alternative tool for collapsing bubble modeling. (paper)

  9. Inverse modeling of GOSAT-retrieved ratios of total column CH4 and CO2 for 2009 and 2010

    Directory of Open Access Journals (Sweden)

    S. Pandey

    2016-04-01

    Full Text Available This study investigates the constraint provided by greenhouse gas measurements from space on surface fluxes. Imperfect knowledge of the light path through the atmosphere, arising from scattering by clouds and aerosols, can create biases in column measurements retrieved from space. To minimize the impact of such biases, ratios of total column retrieved CH4 and CO2 (Xratio) have been used. We apply the ratio inversion method described in Pandey et al. (2015) to retrievals from the Greenhouse Gases Observing SATellite (GOSAT). The ratio inversion method uses the measured Xratio as a weak constraint on CO2 fluxes. In contrast, the more common approach of inverting proxy CH4 retrievals (Frankenberg et al., 2005) prescribes atmospheric CO2 fields and optimizes only CH4 fluxes. The TM5–4DVAR (Tracer Transport Model version 5–variational data assimilation system) inverse modeling system is used to simultaneously optimize the fluxes of CH4 and CO2 for 2009 and 2010. The results are compared to proxy inversions using model-derived CO2 mixing ratios (XCO2model) from CarbonTracker and the Monitoring Atmospheric Composition and Climate (MACC) Reanalysis CO2 product. The performance of the inverse models is evaluated using measurements from three aircraft measurement projects. Xratio and XCO2model are compared with TCCON retrievals to quantify the relative importance of errors in these components of the proxy XCH4 retrieval (XCH4proxy). We find that the retrieval errors in Xratio (mean = 0.61 %) are generally larger than the errors in XCO2model (mean = 0.24 and 0.01 % for CarbonTracker and MACC, respectively). On the annual timescale, the CH4 fluxes from the different satellite inversions are generally in agreement with each other, suggesting that errors in XCO2model do not limit the overall accuracy of the CH4 flux estimates. On the seasonal timescale, however, larger differences are found due to uncertainties in XCO2model, particularly

  10. Inverse modeling of GOSAT-retrieved ratios of total column CH4 and CO2 for 2009 and 2010

    Science.gov (United States)

    Pandey, Sudhanshu; Houweling, Sander; Krol, Maarten; Aben, Ilse; Chevallier, Frédéric; Dlugokencky, Edward J.; Gatti, Luciana V.; Gloor, Emanuel; Miller, John B.; Detmers, Rob; Machida, Toshinobu; Röckmann, Thomas

    2016-04-01

    This study investigates the constraint provided by greenhouse gas measurements from space on surface fluxes. Imperfect knowledge of the light path through the atmosphere, arising from scattering by clouds and aerosols, can create biases in column measurements retrieved from space. To minimize the impact of such biases, ratios of total column retrieved CH4 and CO2 (Xratio) have been used. We apply the ratio inversion method described in Pandey et al. (2015) to retrievals from the Greenhouse Gases Observing SATellite (GOSAT). The ratio inversion method uses the measured Xratio as a weak constraint on CO2 fluxes. In contrast, the more common approach of inverting proxy CH4 retrievals (Frankenberg et al., 2005) prescribes atmospheric CO2 fields and optimizes only CH4 fluxes. The TM5-4DVAR (Tracer Transport Model version 5-variational data assimilation system) inverse modeling system is used to simultaneously optimize the fluxes of CH4 and CO2 for 2009 and 2010. The results are compared to proxy inversions using model-derived CO2 mixing ratios (XCO2model) from CarbonTracker and the Monitoring Atmospheric Composition and Climate (MACC) Reanalysis CO2 product. The performance of the inverse models is evaluated using measurements from three aircraft measurement projects. Xratio and XCO2model are compared with TCCON retrievals to quantify the relative importance of errors in these components of the proxy XCH4 retrieval (XCH4proxy). We find that the retrieval errors in Xratio (mean = 0.61 %) are generally larger than the errors in XCO2model (mean = 0.24 and 0.01 % for CarbonTracker and MACC, respectively). On the annual timescale, the CH4 fluxes from the different satellite inversions are generally in agreement with each other, suggesting that errors in XCO2model do not limit the overall accuracy of the CH4 flux estimates. On the seasonal timescale, however, larger differences are found due to uncertainties in XCO2model, particularly over Australia and in the tropics. The
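
    The proxy construction underlying both records above can be stated in one line: the retrieved column ratio is multiplied by a modelled XCO2 field so that common light-path errors largely cancel. The sketch below illustrates this with made-up column values.

```python
# Sketch of the proxy relationship exploited in these inversions:
# the retrieved column ratio is multiplied by modelled XCO2 to give a
# light-path-bias-corrected proxy XCH4. Values are illustrative only.
def proxy_xch4(xch4_retrieved_ppb, xco2_retrieved_ppm, xco2_model_ppm):
    """XCH4proxy = (XCH4/XCO2)_retrieved * XCO2_model."""
    x_ratio = xch4_retrieved_ppb / xco2_retrieved_ppm
    return x_ratio * xco2_model_ppm

# Hypothetical GOSAT-like values: retrieval biased low by a common light-path error
print(f"{proxy_xch4(1760.0, 382.0, 388.5):.1f} ppb")  # bias largely cancels in the ratio
```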

  11. Bi-Objective Modelling for Hazardous Materials Road-Rail Multimodal Routing Problem with Railway Schedule-Based Space-Time Constraints.

    Science.gov (United States)

    Sun, Yan; Lang, Maoxiang; Wang, Danzhu

    2016-07-28

    The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics will be comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing-Tianjin-Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study.

  12. Bi-Objective Modelling for Hazardous Materials Road–Rail Multimodal Routing Problem with Railway Schedule-Based Space–Time Constraints

    Science.gov (United States)

    Sun, Yan; Lang, Maoxiang; Wang, Danzhu

    2016-01-01

    The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics will be comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing–Tianjin–Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294
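
    The normalized weighted-sum step described in both records above can be sketched on a toy candidate set: objectives are rescaled to [0, 1] and a weight sweep picks out Pareto-efficient routing plans. The candidate costs and risks below are invented for illustration; the paper solves the underlying mixed integer program instead of enumerating candidates.

```python
# Sketch of the normalized weighted-sum method used to trace Pareto solutions
# for the two objectives (generalized cost, social risk). Candidate routes and
# their objective values are hypothetical; the paper solves a MILP instead.
import numpy as np

# (cost in 1e3 CNY, risk in expected exposed population) per candidate routing plan
candidates = np.array([[120, 9.0], [135, 6.5], [150, 5.0], [180, 4.6], [200, 4.5]])

f_min, f_max = candidates.min(axis=0), candidates.max(axis=0)
normalized = (candidates - f_min) / (f_max - f_min)

pareto = set()
for w in np.linspace(0.0, 1.0, 11):
    scalarized = w * normalized[:, 0] + (1.0 - w) * normalized[:, 1]
    pareto.add(int(np.argmin(scalarized)))

print("routing plans on the Pareto front:", sorted(pareto))
```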

  13. Performance of Models for Flash Flood Warning and Hazard Assessment: The 2015 Kali Gandaki Landslide Dam Breach in Nepal

    Directory of Open Access Journals (Sweden)

    Jeremy D. Bricker

    2017-02-01

    Full Text Available The 2015 magnitude 7.8 Gorkha earthquake and its aftershocks weakened mountain slopes in Nepal. Co- and postseismic landsliding and the formation of landslide-dammed lakes along steeply dissected valleys were widespread, among them a landslide that dammed the Kali Gandaki River. Overtopping of the landslide dam resulted in a flash flood downstream, though casualties were prevented because of timely evacuation of low-lying areas. We hindcast the flood using the BREACH physically based dam-break model for upstream hydrograph generation, and compared the resulting maximum flow rate with those resulting from various empirical formulas and a simplified hydrograph based on published observations. Subsequent modeling of downstream flood propagation was compromised by a coarse-resolution digital elevation model with several artifacts. Thus, we used a digital-elevation-model preprocessing technique that combined carving and smoothing to derive topographic data. We then applied the 1-dimensional HEC-RAS model for downstream flood routing, and compared it to the 2-dimensional Delft-FLOW model. Simulations were validated using rectified frames of a video recorded by a resident during the flood in the village of Beni, allowing estimation of maximum flow depth and speed. Results show that hydrological smoothing is necessary when using coarse topographic data (such as SRTM or ASTER), as using raw topography underestimates flow depth and speed and overestimates flood wave arrival lag time. Results also show that the 2-dimensional model produces more accurate results than the 1-dimensional model but the 1-dimensional model generates a more conservative result and can be run in a much shorter time. Therefore, a 2-dimensional model is recommended for hazard assessment and planning, whereas a 1-dimensional model would facilitate real-time warning declaration.

  14. ''Hazardous'' terminology

    International Nuclear Information System (INIS)

    Powers, J.

    1991-01-01

    A number of terms (e.g., ''hazardous chemicals,'' ''hazardous materials,'' ''hazardous waste,'' and similar nomenclature) refer to substances that are subject to regulation under one or more federal environmental laws. State laws and regulations also provide additional, similar, or identical terminology that may be confused with the federally defined terms. Many of these terms appear synonymous, and it is easy to use them interchangeably. However, in a regulatory context, inappropriate use of narrowly defined terms can lead to confusion about the substances referred to, the statutory provisions that apply, and the regulatory requirements for compliance under the applicable federal statutes. This Information Brief provides regulatory definitions, a brief discussion of compliance requirements, and references for the precise terminology that should be used when referring to ''hazardous'' substances regulated under federal environmental laws. A companion CERCLA Information Brief (EH-231-004/0191) addresses ''toxic'' nomenclature

  15. Reliable likelihood ratios for statistical model-based voice activity detector with low false-alarm rate

    Directory of Open Access Journals (Sweden)

    Kim Younggwan

    2011-01-01

    Full Text Available Abstract The role of the statistical model-based voice activity detector (SMVAD is to detect speech regions from input signals using the statistical models of noise and noisy speech. The decision rule of SMVAD is based on the likelihood ratio test (LRT. The LRT-based decision rule may cause detection errors because of statistical properties of noise and speech signals. In this article, we first analyze the reasons why the detection errors occur and then propose two modified decision rules using reliable likelihood ratios (LRs. We also propose an effective weighting scheme considering spectral characteristics of noise and speech signals. In the experiments proposed in this study, with almost no additional computations, the proposed methods show significant performance improvement in various noise conditions. Experimental results also show that the proposed weighting scheme provides additional performance improvement over the two proposed SMVADs.

  16. Reliable likelihood ratios for statistical model-based voice activity detector with low false-alarm rate

    Science.gov (United States)

    Kim, Younggwan; Suh, Youngjoo; Kim, Hoirin

    2011-12-01

    The role of the statistical model-based voice activity detector (SMVAD) is to detect speech regions from input signals using the statistical models of noise and noisy speech. The decision rule of SMVAD is based on the likelihood ratio test (LRT). The LRT-based decision rule may cause detection errors because of statistical properties of noise and speech signals. In this article, we first analyze the reasons why the detection errors occur and then propose two modified decision rules using reliable likelihood ratios (LRs). We also propose an effective weighting scheme considering spectral characteristics of noise and speech signals. In the experiments proposed in this study, with almost no additional computations, the proposed methods show significant performance improvement in various noise conditions. Experimental results also show that the proposed weighting scheme provides additional performance improvement over the two proposed SMVADs.
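
    The LRT decision rule at the heart of these two records can be sketched with complex-Gaussian models for noise and noisy speech, which lead to the familiar per-bin log likelihood ratio used by Sohn-type detectors. The a priori SNR, frame construction, and threshold below are assumptions for illustration, not the proposed modified rules or weighting scheme.

```python
# Sketch of the LRT decision rule behind a statistical model-based VAD:
# per-bin log likelihood ratios under complex-Gaussian noise / noisy-speech
# models, averaged over frequency and compared with a threshold (the form
# used by Sohn-type detectors). All spectra here are illustrative.
import numpy as np

def frame_log_lr(power_spectrum, noise_var, snr_prior):
    """Mean per-bin log LR: gamma*xi/(1+xi) - log(1+xi)."""
    gamma = power_spectrum / noise_var          # a posteriori SNR per bin
    xi = snr_prior                              # a priori SNR per bin
    log_lr = gamma * xi / (1.0 + xi) - np.log1p(xi)
    return log_lr.mean()

rng = np.random.default_rng(3)
noise_var = np.full(128, 1.0)
xi = np.full(128, 2.0)                          # assumed a priori SNR
noise_frame = rng.exponential(scale=noise_var)               # |X|^2 under H0
speech_frame = rng.exponential(scale=noise_var * (1 + xi))   # |X|^2 under H1

threshold = 0.0
for name, spec in (("noise-only", noise_frame), ("noisy speech", speech_frame)):
    decision = "speech" if frame_log_lr(spec, noise_var, xi) > threshold else "non-speech"
    print(f"{name}: {decision}")
```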

  17. Welding hazards

    International Nuclear Information System (INIS)

    Khan, M.A.

    1992-01-01

    Welding technology is advancing rapidly in the developed countries and has developed into a science. Welding processes involving the use of electricity include resistance welding. Welding shops are opened in residential areas, causing safety hazards, particularly for teenagers and children who eagerly watch the welding arc with their naked eyes. There are radiation hazards from ultraviolet rays, which irritate the skin and the eyes. Welding arc light of such intensity could damage the eyes. (Orig./A.B.)

  18. Carbon Structure Hazard Control

    Science.gov (United States)

    Yoder, Tommy; Greene, Ben; Porter, Alan

    2015-01-01

    Carbon composite structures are widely used in virtually all advanced technology industries for a multitude of applications. The high strength-to-weight ratio and resistance to aggressive service environments make them highly desirable. Automotive, aerospace, and petroleum industries extensively use, and will continue to use, this enabling technology. As a result of this broad range of use, field and test personnel are increasingly exposed to hazards associated with these structures. No single published document exists to address the hazards and make recommendations for the hazard controls required for the different exposure possibilities from damaged structures, including airborne fibers, fly, and dust. The potential for personnel exposure varies depending on the application or manipulation of the structure. The effect of exposure to carbon hazards is not limited to personnel; protection of electronics and mechanical equipment must be considered as well. The various exposure opportunities defined in this document include pre-manufacturing fly and dust, the cured structure, manufacturing/machining, post-event cleanup, and post-event test and/or evaluation. Hazard control is defined as it is applicable or applied for the specific exposure opportunity. The carbon exposure hazard includes fly, dust, fiber (cured/uncured), and matrix vapor/thermal decomposition products. By using the recommendations in this document, a high level of confidence can be assured for the protection of personnel and equipment.

  19. Qualitative Analysis of a Diffusive Ratio-Dependent Holling-Tanner Predator-Prey Model with Smith Growth

    Directory of Open Access Journals (Sweden)

    Zongmin Yue

    2013-01-01

    Full Text Available We investigated the dynamics of a diffusive ratio-dependent Holling-Tanner predator-prey model with Smith growth subject to zero-flux boundary condition. Some qualitative properties, including the dissipation, persistence, and local and global stability of positive constant solution, are discussed. Moreover, we give the refined a priori estimates of positive solutions and derive some results for the existence and nonexistence of nonconstant positive steady state.
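
    The abstract does not reproduce the governing equations. For orientation only, one common form of a diffusive ratio-dependent Holling-Tanner system with Smith (food-limited) prey growth and zero-flux boundary conditions is written below; the exact functional forms and parameters used in the paper may differ.

        \begin{aligned}
        \frac{\partial u}{\partial t} &= d_1 \Delta u + \frac{r u (K - u)}{K + c u} - \frac{m u v}{u + a v},\\
        \frac{\partial v}{\partial t} &= d_2 \Delta v + s v \left(1 - \frac{h v}{u}\right),
        \end{aligned}

    where u and v are the prey and predator densities, the first reaction term is Smith's growth law, the second is a ratio-dependent functional response, and the predator equation is the Holling-Tanner (Leslie-Gower) form.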

  20. Spatio-temporal hazard estimation in the Auckland Volcanic Field, New Zealand, with a new event-order model

    Science.gov (United States)

    Bebbington, Mark S.; Cronin, Shane J.

    2011-01-01

    The Auckland Volcanic Field (AVF) with 49 eruptive centres in the last c. 250 ka presents many challenges to our understanding of distributed volcanic field construction and evolution. We re-examine the age constraints within the AVF and perform a correlation exercise matching the well-dated record of tephras from cores distributed throughout the field to the most likely source volcanoes, using thickness and location information and a simple attenuation model. Combining this augmented age information with known stratigraphic constraints, we produce a new age-order algorithm for the field, with errors incorporated using a Monte Carlo procedure. Analysis of the new age model discounts earlier appreciations of spatio-temporal clustering in the AVF. Instead the spatial and temporal aspects appear independent; hence the location of the last eruption provides no information about the next location. The temporal hazard intensity in the field has been highly variable, with over 63% of its centres formed in a high-intensity period between 40 and 20 ka. Another, smaller, high-intensity period may have occurred at the field onset, while the latest event, at 504 ± 5 years B.P., erupted 50% of the entire field's volume. This emphasises the lack of steady-state behaviour that characterises the AVF, which may also be the case in longer-lived fields with a lower dating resolution. Spatial hazard intensity in the AVF under the new age model shows a strong NE-SW structural control of volcanism that may reflect deep-seated crustal or subduction zone processes and matches the orientation of the Taupo Volcanic Zone to the south.
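
    As a minimal illustration of the Monte Carlo treatment of age errors under stratigraphic constraints described above (the event-order algorithm of the paper is considerably more elaborate and also uses tephra correlations), the hypothetical sketch below draws ages from their dating errors and tallies only orderings consistent with known older/younger relations.

        import numpy as np

        def sample_event_orders(age_means, age_sds, constraints, n_draws=10000, seed=1):
            """Tally eruption-order permutations consistent with stratigraphy.

            age_means, age_sds : arrays of dated ages (ka) and 1-sigma errors
            constraints        : list of (older_idx, younger_idx) index pairs
                                 that every accepted draw must respect
            """
            rng = np.random.default_rng(seed)
            order_counts = {}
            for _ in range(n_draws):
                ages = rng.normal(age_means, age_sds)
                if all(ages[i] > ages[j] for i, j in constraints):
                    order = tuple(np.argsort(-ages))  # oldest first
                    order_counts[order] = order_counts.get(order, 0) + 1
            return order_counts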

  1. Development of a tornado wind speed hazard model for limited area (TOWLA) for nuclear power plants at a coastline

    International Nuclear Information System (INIS)

    Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo

    2016-01-01

    It is necessary for Japanese electric power companies to assess tornado risks to nuclear power plants according to a new regulation issued in 2013. The new regulatory guide recommends selecting a long narrow strip area along the coastline, with a width of 5 km to the seaward and landward sides, as the target area of tornado risk assessment, because most Japanese tornadoes have been reported near the coastline, where all Japanese nuclear power plants are located. However, it is very difficult to evaluate a tornado hazard along a coastline, because there is no available information on F-scale and damage length/width for tornadic waterspouts. The purpose of this study is to propose a new tornado wind hazard model for a limited area (TOWLA), which can be applied to a long narrow strip area along a coastline. In order to consider tornadic waterspouts that moved inland, we evaluate the number of waterspouts entering or passing through the target area and add them to the total number of tornadoes that occurred in the area. A characteristic of the model is the use of 'segment lengths' instead of damage lengths. The segment length is the part of the tornado footprint lying within the long narrow strip area. We show two methods for segment-length computation. One is based on tornado records: the latitude and longitude of the tornado genesis and dissipation locations. The other is to compute the expected segment length based on the geometrical relationship among the damage length, area width, and directional characteristics of tornado movement. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)
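
    As an illustration of the first, record-based method of segment-length computation (a hypothetical simplification assuming a straight path and a linear change of coastline distance between the genesis and dissipation points), the sketch below clips a tornado path against the coastal strip; the function name and strip bounds are assumptions, not the paper's implementation.

        def segment_length_in_strip(d_genesis_km, d_dissipation_km, path_length_km,
                                    seaward_km=-5.0, landward_km=5.0):
            """Length of a straight tornado path lying inside a coastal strip.

            d_genesis_km, d_dissipation_km : signed distances of the genesis and
                dissipation points from the coastline (negative = offshore)
            path_length_km : total path (damage) length
            """
            d0, d1 = sorted((d_genesis_km, d_dissipation_km))
            if d1 == d0:
                return path_length_km if seaward_km <= d0 <= landward_km else 0.0
            overlap = max(0.0, min(d1, landward_km) - max(d0, seaward_km))
            return path_length_km * overlap / (d1 - d0)

        # Hypothetical waterspout moving inland: genesis 2 km offshore, dissipation 8 km inland
        print(segment_length_in_strip(-2.0, 8.0, 12.0))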

  2. Hydrological risks in anthropized watersheds: modeling of hazard, vulnerability and impacts on population from south-west of Madagascar

    Science.gov (United States)

    Mamy Rakotoarisoa, Mahefa; Fleurant, Cyril; Taibi, Nuscia; Razakamanana, Théodore

    2016-04-01

    Hydrological risks, especially floods, are recurrent in the Fiherenana watershed in south-west Madagascar. The city of Toliara, which is located at the outlet of the river basin, is subjected each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with the analysis of the hazard by collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using simple statistical models (decomposition of time series). Then, two approaches are used to assess the vulnerability of the city of Toliara and the surrounding villages. The first is a static approach, based on land surveys and the use of GIS. The second is a multi-agent-based simulation model. The first step is the mapping of a vulnerability index which combines several static criteria. This is a microscale indicator (the scale used is the individual house). For each house, there are several vulnerability criteria, such as the potential water depth, the flow rate, or the architectural typology of the building. For the second part, simulations involving agents are used in order to evaluate the degree of vulnerability of homes to flooding. Agents are individual entities to which we can assign behaviours in order to simulate a given phenomenon. The aim is not to assign a criterion to the house as a physical building, such as its architectural typology or its strength; rather, the model estimates the chances of the occupants of the house escaping a catastrophic flood. For this purpose, we compare various settings and scenarios. Some scenarios take into account the effect of certain decisions made by the responsible entities (information and awareness campaigns for the villagers, for example). The simulation consists of two essential parts taking place simultaneously in time: simulation of the rise of water and the flow using

  3. Digital elevation models in the marine domain: investigating the offshore tsunami hazard from submarine landslides

    Science.gov (United States)

    Tappin, David R.

    2015-04-01

    the resolution necessary to identify the hazard from landslides, particularly along convergent margins where this hazard is the greatest. Multibeam mapping of the deep seabed requires low-frequency sound sources that, because of their correspondingly low resolution, cannot produce the detail required to identify the finest-scale features. In addition, outside of most countries there are no repeat surveys that allow seabed changes to be identified; perhaps only Japan has such data. In the near future, as research budgets shrink and ship time becomes ever more expensive, new strategies will have to be adopted to make the best use of the vessels available. Remote AUV technology is almost certainly the answer, and should be increasingly utilised to map the seabed while the mother ship is better used to carry out other duties, such as sampling or seismic data acquisition. In the deep ocean this will have the advantage of acquiring higher-resolution data from high-frequency multibeams. This talk presents a number of projects that show the evolution of the use of MBES in mapping submarine landslides since the PNG tsunami. Data from PNG are presented, together with data from Japan, Hawaii and the NE Atlantic. New multibeam acquisition methodologies are also discussed.

  4. A Diffuse Interface Model for Incompressible Two-Phase Flow with Large Density Ratios

    KAUST Repository

    Xie, Yu

    2016-10-04

    In this chapter, we explore numerical simulations of incompressible and immiscible two-phase flows. The description of the fluid–fluid interface is introduced via a diffuse interface approach. The two-phase fluid system is represented by a coupled Cahn–Hilliard Navier–Stokes set of equations. We discuss challenges and approaches to solving this coupled set of equations using a stabilized finite element formulation, especially in the case of a large density ratio between the two fluids. Specific features that enabled efficient solution of the equations include: (i) a conservative form of the convective term in the Cahn–Hilliard equation which ensures mass conservation of both fluid components; (ii) a continuous formula to compute the interfacial surface tension which results in lower requirement on the spatial resolution of the interface; and (iii) a four-step fractional scheme to decouple pressure from velocity in the Navier–Stokes equation. These are integrated with standard streamline-upwind Petrov–Galerkin stabilization to avoid spurious oscillations. We perform numerical tests to determine the minimal resolution of spatial discretization. Finally, we illustrate the accuracy of the framework using the analytical results of Prosperetti for a damped oscillating interface between two fluids with a density contrast.

  5. Scavenging ratios

    International Nuclear Information System (INIS)

    Krey, P.W.; Toonkel, L.E.

    1977-01-01

    Total 90Sr fallout is adjusted for dry deposition, and scavenging ratios are calculated at Seattle, New York, and Fayetteville, Ark. Stable-lead scavenging ratios are also presented for New York. These ratios show large scatter, but average values are generally inversely proportional to precipitation. Stable-lead ratios decrease more rapidly with precipitation than do those of 90Sr, a decrease reflecting a lesser availability of lead to the scavenging processes
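
    For context, the scavenging ratio referred to in this record is conventionally defined as the ratio of the concentration of a species in precipitation to its concentration in surface-level air; one common dimensionless form (conventions differ on whether concentrations are expressed per unit mass or per unit volume, so this is an illustrative definition rather than necessarily the one used by the authors) is

        W \;=\; \frac{C_{\text{precip}}}{C_{\text{air}}}

    so that, for a given amount of activity removed, heavier precipitation dilutes the deposit and yields a smaller ratio, consistent with the inverse dependence on precipitation reported above.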

  6. A model for roll stall and the inherent stability modes of low aspect ratio wings at low Reynolds numbers

    Science.gov (United States)

    Shields, Matt

    The development of Micro Aerial Vehicles has been hindered by the poor understanding of the aerodynamic loading and stability and control properties of the low Reynolds number regime in which the inherent low aspect ratio (LAR) wings operate. This thesis experimentally evaluates the static and damping aerodynamic stability derivatives to provide a complete aerodynamic model for canonical flat plate wings of aspect ratios near unity at Reynolds numbers under 1 × 10⁵. This permits the complete functionality of the aerodynamic forces and moments to be expressed and the equations of motion to be solved, thereby identifying the inherent stability properties of the wing. This provides a basis for characterizing the stability of full vehicles. The influence of the tip vortices during sideslip perturbations is found to induce a loading condition referred to as roll stall, a significant roll moment created by the spanwise induced-velocity asymmetry related to the displacement of the vortex cores relative to the wing. Roll stall is manifested by a linearly increasing roll moment with low to moderate angles of attack and a subsequent stall event similar to a lift polar; this behavior is not experienced by conventional (high aspect ratio) wings. The resulting large magnitude of the roll stability derivative, Cl,beta, and lack of roll damping, Cl,p, create significant modal responses of the lateral state variables; a linear model used to evaluate these modes is shown to accurately reflect the solution obtained by numerically integrating the nonlinear equations. An unstable Dutch roll mode dominates the behavior of the wing for small perturbations from equilibrium, and in the presence of angle of attack oscillations a previously unconsidered coupled mode, referred to as roll resonance, is seen to develop and drive the bank angle away from equilibrium. Roll resonance requires a linear time variant (LTV) model to capture the behavior of the bank angle, which is attributed to the

  7. Rich dynamics of a food chain model with ratio-dependent type III ...

    African Journals Online (AJOL)

    user

    It breaks the stable behaviour of the model and drives it to an unstable state. Keywords: Food .... System (b) is proposed based on the assumption that in the absence of the predator the prey satisfies Hutchinson's equation. A time-delay τ in the ...... By Descartes' rule of signs, the cubic equation (18d) has at least one positive root.

  8. Golden Ratio

    Indian Academy of Sciences (India)

    Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many structures. This ratio comes from Fibonacci numbers. In this article, we explore this ...

  9. U.S. Department of Energy Workers' mental models of radiation and chemical hazards in the workplace

    International Nuclear Information System (INIS)

    Quadrel, M.J.; Blanchard, K.A.; Lundgren, R.E.; McMakin, A.H.; Mosley, M.T.; Strom, D.J.

    1994-05-01

    A pilot study was performed to test the mental models methodology regarding knowledge and perceptions of U.S. Department of Energy contractor radiation workers about ionizing radiation and hazardous chemicals. The mental models methodology establishes a target population's beliefs about risks and compares them with current scientific knowledge. The ultimate intent is to develop risk communication guidelines that address information gaps or misperceptions that could affect decisions and behavior. In this study, 15 radiation workers from the Hanford Site in Washington State were interviewed about radiation exposure processes and effects. Their beliefs were mapped onto a science model of the same topics to see where differences occurred. In general, workers' mental models covered many of the high-level parts of the science model but did not have the same level of detail. The following concepts appeared to be well understood by most interviewees: types, form, and properties of workplace radiation; administrative and physical controls to reduce radiation exposure risk; and the relationship of dose and effects. However, several concepts were rarely mentioned by most interviewees, indicating potential gaps in worker understanding. Most workers did not discuss the wide range of measures for neutralizing or decontaminating individuals following internal contamination. Few noted specific ways of measuring dose or factors that affect dose. Few mentioned the range of possible effects, including genetic effects, birth defects, or high dose effects. Variables that influence potential effects were rarely discussed. Workers rarely mentioned how basic radiation principles influenced the source, type, or mitigation of radiation risk in the workplace

  10. Introduction: Hazard mapping

    Science.gov (United States)

    Baum, Rex L.; Miyagi, Toyohiko; Lee, Saro; Trofymchuk, Oleksandr M

    2014-01-01

    Twenty papers were accepted into the session on landslide hazard mapping for oral presentation. The papers presented susceptibility and hazard analysis based on approaches ranging from field-based assessments to statistically based models to assessments that combined hydromechanical and probabilistic components. Many of the studies have taken advantage of increasing availability of remotely sensed data and nearly all relied on Geographic Information Systems to organize and analyze spatial data. The studies used a range of methods for assessing performance and validating hazard and susceptibility models. A few of the studies presented in this session also included some element of landslide risk assessment. This collection of papers clearly demonstrates that a wide range of approaches can lead to useful assessments of landslide susceptibility and hazard.

  11. Estimation in the positive stable shared frailty Cox proportional hazards model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Pipper, Christian Bressen

    2005-01-01

    model in situations where the correlated survival data show a decreasing association with time. In this paper, we devise a likelihood based estimation procedure for the positive stable shared frailty Cox model, which is expected to obtain high efficiency. The proposed estimator is provided with large...

  12. Deep gray matter demyelination detected by magnetization transfer ratio in the cuprizone model.

    Directory of Open Access Journals (Sweden)

    Sveinung Fjær

    Full Text Available In multiple sclerosis (MS), the correlation between lesion load on conventional magnetic resonance imaging (MRI) and clinical disability is weak. This clinico-radiological paradox might partly be due to the low sensitivity of conventional MRI to detect gray matter demyelination. Magnetization transfer ratio (MTR) has previously been shown to detect white matter demyelination in mice. In this study, we investigated whether MTR can detect gray matter demyelination in cuprizone exposed mice. A total of 54 female C57BL/6 mice were split into one control group ( and eight cuprizone exposed groups ([Formula: see text]). The mice were exposed to [Formula: see text] (w/w) cuprizone for up to six weeks. MTR images were obtained at a 7 Tesla Bruker MR scanner before cuprizone exposure, weekly for six weeks during cuprizone exposure, and once two weeks after termination of cuprizone exposure. Immunohistochemistry staining for myelin (anti-Proteolipid Protein) and oligodendrocytes (anti-Neurite Outgrowth Inhibitor Protein A) was obtained after each weekly scanning. Rates of MTR change and correlations between MTR values and histological findings were calculated in five brain regions. In the corpus callosum and the deep gray matter a significant rate of MTR value decrease was found, [Formula: see text] per week ([Formula: see text]) and [Formula: see text] per week ([Formula: see text]), respectively. The MTR values correlated to myelin loss as evaluated by immunohistochemistry (Corpus callosum: [Formula: see text]. Deep gray matter: [Formula: see text]), but did not correlate to oligodendrocyte density. Significant results were not found in the cerebellum, the olfactory bulb or the cerebral cortex. This study shows that MTR can be used to detect demyelination in the deep gray matter, which is of particular interest for imaging of patients with MS, as deep gray matter demyelination is common in MS, and is not easily detected on conventional clinical MRI.
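
    For context, the magnetization transfer ratio in this record is conventionally computed voxel-wise from a pair of images acquired with and without the MT saturation pulse:

        \mathrm{MTR} \;=\; \frac{S_{0} - S_{\mathrm{sat}}}{S_{0}} \times 100\%

    where S_sat and S_0 are the signal intensities with and without saturation; demyelination reduces the bound (macromolecular) proton pool and therefore lowers the MTR, which is why a decreasing MTR is interpreted above as myelin loss.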

  13. Optimum Concentration Ratio Analysis Using Dynamic Thermal Model for Concentrated Photovoltaic System

    Science.gov (United States)

    2012-03-22

    Only fragments of the source document are available in this record; they indicate a study of solar cells made from different semiconductor materials (GaAs, InGaP, CdTe and other high-efficiency cell materials) to investigate thermal properties, and cite, among other references, a photovoltaic module model in MATLAB (Gonzalez-Longatt) and a simple passive cooling structure with heat analysis for a 500X concentration PV module (Araki et al.).

  14. Numerical estimation of wall friction ratio near the pseudo-critical point with CFD-models

    International Nuclear Information System (INIS)

    Angelucci, M.; Ambrosini, W.; Forgione, N.

    2013-01-01

    In this paper, the STAR-CCM+ CFD code is used in the attempt to reproduce the values of friction factor observed in experimental data at supercritical pressures at various operating conditions. A short survey of available data and correlations for smooth pipe friction in circular pipes puts the basis for the discussion, reporting observed trends of friction factor in the liquid-like and the gas-like regions and within the transitional region across the pseudo-critical temperature. For smooth pipes, a general decrease of the friction factor in the transitional region is reported, constituting one of the relevant effects to be predicted by the computational fluid-dynamic models. A limited number of low-Reynolds number models are adopted, making use of refined near-wall discretisation as required by the constraint y⁺ < 1 at the wall. In particular, the Lien k–ε and the SST k–ω models are considered. The values of the wall shear stress calculated by the code are then post-processed on the basis of bulk fluid properties to obtain the Fanning and then the Darcy–Weisbach friction factors, based on their classical definitions. The obtained values are compared with those provided by experimental tests and correlations, finding a reasonable qualitative agreement. Expectedly, the agreement is better in the gas-like and liquid-like regions, where fluid property changes are moderate, than in the transitional region, where the trends provided by available correlations are reproduced only in a qualitative way
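
    The post-processing step described above, converting computed wall shear stress to friction factors from bulk properties, follows the classical definitions; a minimal sketch, assuming SI units and hypothetical variable names:

        def fanning_friction_factor(tau_wall, rho_bulk, u_bulk):
            """Fanning friction factor from wall shear stress and bulk properties:
            f_F = 2 * tau_w / (rho_b * u_b**2)."""
            return 2.0 * tau_wall / (rho_bulk * u_bulk ** 2)

        def darcy_weisbach_friction_factor(tau_wall, rho_bulk, u_bulk):
            """The Darcy-Weisbach factor is four times the Fanning factor."""
            return 4.0 * fanning_friction_factor(tau_wall, rho_bulk, u_bulk)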

  15. Ratios between effective doses for tomographic and mathematical models due to internal exposure to photons

    International Nuclear Information System (INIS)

    Lima, F.R.A.; Kramer, R.; Khoury, H.J.; Santos, A.M.; Loureiro, E.C.M.

    2005-01-01

    The development of new and sophisticated Monte Carlo codes and tomographic (voxel) human phantoms motivated the International Commission on Radiological Protection (ICRP) to revise the traditional models of exposure, which have been used to calculate effective dose coefficients for organs and tissues based on mathematical phantoms known as MIRD5. This paper shows the results of calculations using the tomographic phantoms MAX (Male Adult voXel) and FAX (Female Adult voXel), recently developed by the authors, as well as the gender-specific MIRD5-type phantoms ADAM and EVA, coupled to the EGS4 and MCNP4C Monte Carlo codes, for internal exposure to photons with energies between 10 keV and 4 MeV for several source organs. Effective doses for both types of model, tomographic and mathematical, are compared separately as a function of the Monte Carlo code replacement, of the compositions of human tissues, and of the anatomy reproduced from tomographic images. The results indicate that, for photon internal exposure, the use of voxel-based models of exposure increases the values of effective doses by up to 70% for some of the source organs considered in this study, when compared with the corresponding results obtained with MIRD5-type phantoms

  16. Model test on the relationship feed energy and protein ratio to the production and quality of milk protein

    Science.gov (United States)

    Hartanto, R.; Jantra, M. A. C.; Santosa, S. A. B.; Purnomoadi, A.

    2018-01-01

    The purpose of this research was to find an appropriate relationship model between the feed energy and protein ratio and the amount and quality of milk protein. This research was conducted at Getasan Sub-district, Semarang Regency, Central Java Province, Indonesia using 40 samples (Holstein Friesian cattle, lactation period II-III and lactation month 3-4). Data were analyzed using linear and quadratic regressions to predict the production and quality of milk protein from the feed energy and protein ratio that describes the diet. The significance of the model was tested using analysis of variance. The coefficient of determination (R2), residual variance (RV) and root mean square prediction error (RMSPE) were reported for the developed equations as indicators of the goodness of model fit. The results showed no relationship of milk protein (kg), milk casein (%), milk casein (kg) or milk urea N (mg/dl) with CP/TDN. A significant relationship was observed for milk production (L or kg) and milk protein (%) as functions of CP/TDN, in both linear and quadratic models. In addition, a quadratic change in milk production (L) (P = 0.003), milk production (kg) (P = 0.003) and milk protein concentration (%) (P = 0.026) was observed with increasing CP/TDN. It can be concluded that the quadratic equation was the better-fitting model for this research, because the quadratic equation has a larger R2, smaller RV and smaller RMSPE than the linear equation.
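
    As an illustration of the model comparison described above (linear versus quadratic fits ranked by R2, residual variance and RMSPE), a minimal sketch with hypothetical CP/TDN ratios and milk yields; the data below are invented for illustration and are not the study's measurements.

        import numpy as np

        def fit_and_score(x, y, degree):
            """Fit a polynomial of the given degree; return coefficients, R^2 and RMSPE."""
            coeffs = np.polyfit(x, y, degree)
            y_hat = np.polyval(coeffs, x)
            ss_res = np.sum((y - y_hat) ** 2)
            ss_tot = np.sum((y - np.mean(y)) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            rmspe = np.sqrt(np.mean((y - y_hat) ** 2))
            return coeffs, r2, rmspe

        cp_tdn = np.array([0.20, 0.22, 0.24, 0.26, 0.28, 0.30])   # hypothetical diet ratios
        milk   = np.array([11.0, 12.5, 13.4, 13.8, 13.6, 12.9])   # hypothetical yields, kg/day
        for degree in (1, 2):
            _, r2, rmspe = fit_and_score(cp_tdn, milk, degree)
            print(degree, round(r2, 3), round(rmspe, 3))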

  17. A model composition for Mars derived from the oxygen isotopic ratios of martian/SNC meteorites. [Abstract only

    Science.gov (United States)

    Delaney, J. S.

    1994-01-01

    Oxygen is the most abundant element in most meteorites, yet the ratios of its isotopes are seldom used to constrain the compositional history of achondrites. The two major achondrite groups have O isotope signatures that differ from any plausible chondritic precursors and lie between the ordinary and carbonaceous chondrite domains. If the assumption is made that the present global sampling of chondritic meteorites reflects the variability of O reservoirs at the time of planetesimal/planet aggregation in the early nebula, then the O in these groups must reflect mixing between known chondritic reservoirs. This approach, in combination with constraints based on Fe-Mn-Mg systematics, has been used previously to model the composition of the basaltic achondrite parent body (BAP) and provides a model precursor composition that is generally consistent with previous eucrite parent body (EPB) estimates. The same approach is applied to Mars, exploiting the assumption that the SNC and related meteorites sample the martian lithosphere. Model planet and planetesimal compositions can be derived by mixing of known chondritic components using O isotope ratios as the fundamental compositional constraint. The major- and minor-element composition for Mars derived here and that derived previously for the basaltic achondrite parent body are, in many respects, compatible with model compositions generated using completely independent constraints. The role of volatile elements, and alkalis in particular, remains a major difficulty in applying such models.

  18. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    Full Text Available In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, the unreasonable price ratio and other issues. This paper combines the characteristics of Beijing's energy market and puts forward maximization of the society-economy equilibrium indicator R, taking into consideration the mitigation cost, to determine a reasonable price ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  19. The hazard education model in the high school science-club activities above active huge fault

    Science.gov (United States)

    Nakamura, R.

    2017-12-01

    Along the western coast of the Pacific Ocean, including Japan, there are numerous volcanoes and earthquakes. The biggest cause is their location on the borders of tectonic plates. The pressure among the plates causes strains and cracks, and along the island arcs these strains produce long and enormous faults; more than 150 huge faults have been reported (Headquarters for Earthquake Research Promotion, Japan, 2017). Below the author's school lies one of the largest of these faults, the Nagamachi-Rifu line, which also lies under Sendai, a city of one million people. Before the 2011 Tohoku earthquake, a very large earthquake was predicted on the basis of this fault's activity. Investigating the fault activity with our school students, who live in the area closest to the fault, is therefore one of the most important pieces of hazard education. We are now constructing a science-club activity that focuses on (1) locating fault line(s) with topographic maps and on-foot searches and (2) investigating boring core samples of soil obtained when the school was founded. (1) Estimate of fault displacement from on-foot observation: in order to locate the unknown fault line in the Rifu area, it was first necessary to estimate its position on maps (1:25,000 Scale Topographic Maps and Active Faults in Urban Area Map (Sendai), Geographical Survey Institute of Japan). After that estimation, we walked over the region with the club students to observe slopes produced by fault activity and recorded them on the maps. From the observed slope gaps, there may be three or four fault lines located parallel to the known active faults. (2) Investigation of boring core samples above the fault: we investigated six columnar boring core samples that were excavated when the school was built. The maximum depth of the samples is over 20 m; some contain newly filled sands over original ash tephra and pumice from old volcanoes located to the west. In the club activities, we described columnar diagrams of the sediments and discussed the sediment

  20. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    Science.gov (United States)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect laboratory for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring datasets and, in particular, the rapid geodynamics, which clearly demonstrate some seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using an historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent, fault-based modeling, joined with 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps

  1. Profiling water vapor mixing ratios in Finland by means of a Raman lidar, a satellite and a model

    Science.gov (United States)

    Filioglou, Maria; Nikandrova, Anna; Niemelä, Sami; Baars, Holger; Mielonen, Tero; Leskinen, Ari; Brus, David; Romakkaniemi, Sami; Giannakaki, Elina; Komppula, Mika

    2017-11-01

    We present tropospheric water vapor profiles measured with a Raman lidar during three field campaigns held in Finland. Co-located radio soundings are available throughout the period for the calibration of the lidar signals. We investigate the possibility of calibrating the lidar water vapor profiles in the absence of co-existing on-site soundings using water vapor profiles from the combined Advanced InfraRed Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) satellite product; the Aire Limitée Adaptation dynamique Développement INternational and High Resolution Limited Area Model (ALADIN/HIRLAM) numerical weather prediction (NWP) system; and the nearest radio sounding station located 100 km away from the lidar site (only for the permanent location of the lidar). The uncertainties of the calibration factor derived from the soundings, the satellite and the model data are quantified for each instrument/model for the period of the campaigns. A good agreement is observed for all comparisons, with relative errors that do not exceed 50 % up to 8 km altitude in most cases. A 4-year seasonal analysis of vertical water vapor is also presented for the Kuopio site in Finland. During winter months the air in Kuopio is dry (1.15±0.40 g kg-1); during summer it is wet (5.54±1.02 g kg-1); and at other times the air is in an intermediate state. These are averaged values over the lowest 2 km of the atmosphere. Above that height a quick decrease in water vapor mixing ratios is observed, except during summer months, where favorable atmospheric conditions enable higher mixing ratio values at higher altitudes. Lastly, the seasonal change in the disagreement between the lidar and the model has been studied. The analysis showed that, on average, the model underestimates water vapor mixing ratios at high altitudes during spring and summer.
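
    As a simple illustration of deriving a single calibration factor from a reference humidity profile (the study's actual procedure and uncertainty treatment are more involved), the sketch below performs a least-squares fit through the origin between the uncalibrated lidar water vapor signal ratio and a reference mixing ratio profile interpolated to the same altitudes; names and inputs are hypothetical.

        import numpy as np

        def lidar_calibration_factor(uncalibrated_ratio, reference_mixing_ratio):
            """Least-squares (through the origin) factor C such that
            w ~= C * (P_H2O / P_N2), with the reference profile taken from a
            co-located sounding, a satellite retrieval or an NWP model."""
            x = np.asarray(uncalibrated_ratio, dtype=float)
            y = np.asarray(reference_mixing_ratio, dtype=float)
            mask = np.isfinite(x) & np.isfinite(y)
            return np.sum(x[mask] * y[mask]) / np.sum(x[mask] ** 2)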

  2. Climate change impact assessment on Veneto and Friuli plain groundwater. Part I: An integrated modeling approach for hazard scenario construction

    International Nuclear Information System (INIS)

    Baruffi, F.; Cisotto, A.; Cimolino, A.; Ferri, M.; Monego, M.; Norbiato, D.; Cappelletto, M.; Bisaglia, M.; Pretner, A.; Galli, A.; Scarinci, A.; Marsala, V.; Panelli, C.; Gualdi, S.; Bucchignani, E.; Torresan, S.; Pasini, S.; Critto, A.

    2012-01-01

    Climate change impacts on water resources, particularly groundwater, is a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers due to its relevant link with European water policy directives (e.g. 2000/60/EC and 2007/118/EC) and related environmental objectives. The understanding of long-term impacts of climate variability and change is therefore a key challenge in order to address effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life + project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of high Veneto and Friuli Plain, Northern Italy. Given the aim to evaluate potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961–1990 and the projection period 2010–2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections have been done prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling, then, provided the precipitation, temperatures and evapo-transpiration fields used for the impact analysis. Based on downscaled climate projections, 3 reference scenarios for the period 2071–2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble

  3. Dynamic Roughness Ratio-Based Framework for Modeling Mixed Mode of Droplet Evaporation.

    Science.gov (United States)

    Gunjan, Madhu Ranjan; Raj, Rishi

    2017-07-18

    The spatiotemporal evolution of an evaporating sessile droplet and its effect on lifetime is crucial to various disciplines of science and technology. Although experimental investigations suggest three distinct modes through which a droplet evaporates, namely, the constant contact radius (CCR), the constant contact angle (CCA), and the mixed, only the CCR and the CCA modes have been modeled reasonably. Here we use experiments with water droplets on flat and micropillared silicon substrates to characterize the mixed mode. We visualize that a perfect CCA mode after the initial CCR mode is an idealization on a flat silicon substrate, and the receding contact line undergoes intermittent but recurring pinning (CCR mode) as it encounters fresh contaminants on the surface. The resulting increase in roughness lowers the contact angle of the droplet during these intermittent CCR modes until the next depinning event, followed by the CCA mode of evaporation. The airborne contaminants in our experiments are mostly loosely adhered to the surface and travel along with the receding contact line. The resulting gradual increase in the apparent roughness and hence the extent of CCR mode over CCA mode forces appreciable decrease in the contact angle observed during the mixed mode of evaporation. Unlike loosely adhered airborne contaminants on flat samples, micropillars act as fixed roughness features. The apparent roughness fluctuates about the mean value as the contact line recedes between pillars. Evaporation on these surfaces exhibits stick-jump motion with a short-duration mixed mode toward the end when the droplet size becomes comparable to the pillar spacing. We incorporate this dynamic roughness into a classical evaporation model to accurately predict the droplet evolution throughout the three modes, for both flat and micropillared silicon surfaces. We believe that this framework can also be extended to model the evaporation of nanofluids and the coffee-ring effect, among

  4. Integrated satellite InSAR and slope stability modeling to support hazard assessment at the Safuna Alta glacial lake, Peru

    Science.gov (United States)

    Cochachin, Alejo; Frey, Holger; Huggel, Christian; Strozzi, Tazio; Büechi, Emanuel; Cui, Fanpeng; Flores, Andrés; Saito, Carlos

    2017-04-01

    The Safuna glacial lakes (77˚ 37' W, 08˚ 50' S) are located in the headwaters of the Tayapampa catchment, in the northernmost part of the Cordillera Blanca, Peru. The upper lake, Laguna Safuna Alta, at 4354 m asl, formed in the 1960s behind a terminal moraine of the retreating Pucajirca Glacier, named after the peak south of the lakes. Safuna Alta currently has a volume of 15 × 10⁶ m³. In 2002 a rock fall of several million m³ from the proximal left lateral moraine hit the Safuna Alta lake and triggered an impact wave which overtopped the moraine dam and passed into the lower lake, Laguna Safuna Baja, which absorbed most of the outburst flood from the upper lake, but nevertheless caused losses of cattle, degradation of agricultural land downstream and damage to a hydroelectric power station in the Quitaracsa gorge. Event reconstructions showed that the impact wave in the Safuna Alta lake had a runup height of 100 m or more, and weakened the moraine dam of Safuna Alta. This fact, in combination with the large lake volumes and the continued possibility of landslides from the left proximal moraine, poses a considerable risk for the downstream settlements as well as the recently completed Quitaracsa hydroelectric power plant. In the framework of a project funded by the European Space Agency (ESA), the hazard situation at the Safuna Alta lake is assessed by a combination of satellite radar data analysis, field investigations, and slope stability modeling. Interferometric Synthetic Aperture Radar (InSAR) analyses of ALOS-1 PALSAR-1, ALOS-2 PALSAR-2 and Sentinel-1 data from 2016 reveal terrain displacements of 2 cm yr⁻¹ in the detachment zone of the 2002 rock avalanche. More detailed insights into the characteristics of these terrain deformations are gained by repeat surveys with differential GPS (DGPS) and tachymetric measurements. A drone flight provides the information for the generation of a high-resolution digital elevation model (DEM), which is used for the

  5. Mathematical model to predict temperature profile and air–fuel equivalence ratio of a downdraft gasification process

    International Nuclear Information System (INIS)

    Jaojaruek, Kitipong

    2014-01-01

    Highlights: • A mathematical model based on finite computation analysis was developed. • The model covers all zones of the gasification process, which will be useful to improve gasifier design. • The model can predict the temperature profile, feedstock consumption rate and reaction equivalence ratio (ϕ). • Model-predicted parameters fitted well with experimental values. - Abstract: A mathematical model for the entire length of a downdraft gasifier was developed using thermochemical principles to derive energy and mass conversion equations. Analysis of heat transfer (conduction, convection and radiation) and a chemical kinetic technique were applied to predict the temperature profile, feedstock consumption rate (FCR) and reaction equivalence ratio (RER). The model will be useful for designing gasifiers and estimating output gas composition and gas production rate (GPR). The implicit finite difference method solved the equations over the considered reactor length (50 cm) and diameter (20 cm). Convergence criteria for the calculation of temperature and feedstock consumption rate were 1 × 10⁻⁶ °C and 1 × 10⁻⁶ kg/h, respectively. Experimental validation showed that model outputs fitted well with experimental data. Maximum deviations between model and experimental data for temperature, FCR and RER were 52 °C at a combustion temperature of 663 °C, 0.7 kg/h at a rate of 8.1 kg/h, and 0.03 at an RER of 0.42, respectively. Experimental uncertainties of temperature, FCR and RER were 24.4 °C, 0.71 kg/h and 0.04, respectively, at a confidence level of 95%
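
    The record describes an implicit finite-difference solution over a 50 cm reactor length with a 1 × 10⁻⁶ convergence criterion. The sketch below is only a generic illustration of that numerical pattern (an implicit discretization of a 1-D steady conduction problem with a temperature-dependent conductivity, iterated to a tolerance); the grid size, boundary temperatures and property law are invented, and this is not the paper's model.

        import numpy as np

        def solve_steady_1d_temperature(n=101, length=0.5, t_left=663.0, t_right=300.0,
                                        tol=1e-6, max_iter=500):
            """Implicit FD solve of d/dx( k(T) dT/dx ) = 0, iterated until successive
            temperature profiles differ by less than `tol` (degrees C)."""
            x = np.linspace(0.0, length, n)
            temp = np.linspace(t_left, t_right, n)          # initial guess
            for _ in range(max_iter):
                k = 0.05 + 1.0e-4 * temp                    # hypothetical k(T), W/m/K
                k_face = 0.5 * (k[:-1] + k[1:])             # conductivity at cell faces
                A = np.zeros((n, n))
                b = np.zeros(n)
                A[0, 0] = A[-1, -1] = 1.0
                b[0], b[-1] = t_left, t_right
                for i in range(1, n - 1):
                    A[i, i - 1] = k_face[i - 1]
                    A[i, i + 1] = k_face[i]
                    A[i, i] = -(k_face[i - 1] + k_face[i])
                new_temp = np.linalg.solve(A, b)
                if np.max(np.abs(new_temp - temp)) < tol:
                    return x, new_temp
                temp = new_temp
            return x, temp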

  6. Probabilistic landslide hazards and risk mapping on Penang Island ...

    Indian Academy of Sciences (India)

    Only a page fragment of the source article is available in this record. It includes the caption "Figure 2. Landslide susceptibility map based on frequency ratio model" and text stating that the frequency ratio of the class in question is almost equal to 1, that landslide probability increases with the vegetation index value, and that this could be due to more vegetation seen along ...
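
    The frequency ratio referred to in this fragment is conventionally computed per conditioning-factor class as the share of landslide occurrences falling in that class divided by the share of the study area occupied by that class, so a value near 1 indicates average susceptibility; a minimal sketch with hypothetical class counts:

        def frequency_ratio(landslide_counts, area_counts):
            """Frequency ratio per class: (landslides in class / all landslides)
            divided by (class area / total area); values > 1 mark classes more
            prone to landsliding than average."""
            total_ls = sum(landslide_counts)
            total_area = sum(area_counts)
            return [(ls / total_ls) / (a / total_area)
                    for ls, a in zip(landslide_counts, area_counts)]

        # Hypothetical vegetation-index classes (low, medium, high)
        print(frequency_ratio([5, 20, 35], [3000, 5000, 4000]))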

  7. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
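
    The paper's contribution is a Bayesian reparameterization (with a sampler coded in R); as background only, the sketch below shows a common frequentist analogue used when log-binomial fits are difficult, namely a Poisson working model with robust (sandwich) standard errors for the risk ratio. This is not the authors' algorithm, and the simulated data are purely illustrative.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        exposure = rng.integers(0, 2, size=n).astype(float)
        # Hypothetical common outcome: baseline risk 0.30, true risk ratio 1.5
        p = np.where(exposure == 1, 0.45, 0.30)
        y = rng.binomial(1, p)

        X = sm.add_constant(exposure)
        # Poisson working model; exp(coefficient) estimates the risk ratio
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
        print(np.exp(fit.params[1]))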

  8. Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment

    Science.gov (United States)

    Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc

    2013-04-01

    Structural and mechanical analyses of rock masses are key components of rock slope stability assessment. The complementary use of photogrammetric techniques [Poropat, 2001] and coupled DFN-DEM models [Harthong et al., 2012] provides a methodology that can be applied to complex 3D configurations. The DFN-DEM formulation [Scholtès & Donzé, 2012a,b] has been chosen for the modeling since it can explicitly take the fracture sets into account. Analyses conducted in 3D can produce very complex and unintuitive failure mechanisms. Therefore, a modeling strategy must be established in order to identify the key features which control the stability. For this purpose, a realistic case is presented to show the overall methodology from photogrammetric acquisition to mechanical modeling. By combining Sirovision and YADE Open DEM [Kozicki & Donzé, 2008, 2009], it can be shown that even for large camera-to-rock-slope ranges (tested at about one kilometer), the accuracy of the data is sufficient to assess the role of the structures in the stability of a jointed rock slope. In this case, on-site stereo pairs of 2D images were taken to create 3D surface models. Then, digital identification of structural features in the unstable block zone was processed with the Sirojoint software [Sirovision, 2010]. After acquiring the numerical topography, the 3D digitized and meshed surface was imported into the YADE Open DEM platform, where the studied rock mass was defined as a closed (manifold) volume bounding the numerical model. The discontinuities were then imported into the model as meshed planar elliptic surfaces. The model was then submitted to gravity loading. During this step, high values of cohesion were assigned to the discontinuities in order to avoid failure or block displacements triggered by inertial effects. To assess the respective roles of the pre-existing discontinuities in the block stability, different configurations have been tested as well as different degree of

  9. Phenomenological approach to the modelling of elliptical galaxies: The problem of the mass-to-light ratio

    Directory of Open Access Journals (Sweden)

    Samurović S.

    2007-01-01

    Full Text Available In this paper the problem of the phenomenological modelling of elliptical galaxies using various available observational data is presented. Recently, Tortora, Cardona and Piedipalumbo (2007) suggested a double power law expression for the global cumulative mass-to-light ratio of elliptical galaxies. We tested their expression on a sample of ellipticals for which we have estimates of the mass-to-light ratio beyond ~ 3 effective radii, a region where dark matter is expected to play an important dynamical role. We found that, for all the galaxies in our sample, α + β > 0, but that this does not necessarily mean a high dark matter content. The galaxies with higher mass (and higher dark matter content) also have higher values of α + β. It was also shown that there is an indication that the galaxies with higher values of the effective radius also have higher dark matter content.

  10. A Nonparametric Bayesian Approach to Seismic Hazard Modeling Using the ETAS Framework

    Science.gov (United States)

    Ross, G.

    2015-12-01

    The epidemic-type aftershock sequence (ETAS) model is one of the most popular tools for modeling seismicity and quantifying risk in earthquake-prone regions. Under the ETAS model, the occurrence times of earthquakes are treated as a self-exciting Poisson process where each earthquake briefly increases the probability of subsequent earthquakes occurring soon afterwards, which captures the fact that large mainshocks tend to produce long sequences of aftershocks. A triggering kernel controls the amount by which the probability increases based on the magnitude of each earthquake, and the rate at which it then decays over time. This triggering kernel is usually chosen heuristically, to match the parametric form of the modified Omori law for aftershock decay. However, recent work has questioned whether this is an appropriate choice. Since the choice of kernel has a large impact on the predictions made by the ETAS model, avoiding misspecification is crucially important. We present a novel nonparametric version of ETAS which avoids making parametric assumptions, and instead learns the correct specification from the data itself. Our approach is based on the Dirichlet process, a modern class of Bayesian prior distributions which allows for efficient inference over an infinite-dimensional space of functions. We show how our nonparametric ETAS model can be fit to data, and present results demonstrating that the fit is greatly improved compared to the standard parametric specification. Additionally, we explain how our model can be used to perform probabilistic declustering of earthquake catalogs, to classify earthquakes as being either aftershocks or mainshocks, and to learn the causal relations between pairs of earthquakes.
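
    For reference, the parametric ETAS conditional intensity that the nonparametric approach generalizes is commonly written (in one standard parameterization, shown here for orientation only) as

        \lambda(t \mid \mathcal{H}_t) \;=\; \mu \;+\; \sum_{i:\, t_i < t} K\, e^{\alpha (m_i - M_0)} \, (t - t_i + c)^{-p}

    where μ is the background rate, the exponential term is the magnitude-dependent productivity, and the power-law term is the modified Omori decay with parameters c and p; the record above describes replacing this fixed triggering kernel with one learned under a Dirichlet process prior.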

  11. Citizens' Perceptions of Flood Hazard Adjustments: An Application of the Protective Action Decision Model

    Science.gov (United States)

    Terpstra, Teun; Lindell, Michael K.

    2013-01-01

    Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data (N = 1,115) showed that…

  12. Development of Algal Interspecies Correlation Estimation Models for Chemical Hazard Assessment

    Science.gov (United States)

    Web-based Interspecies Correlation Estimation (ICE) is an application developed to predict the acute toxicity of a chemical from 1 species to another taxon. Web-ICE models use the acute toxicity value for a surrogate species to predict effect values for other species, thus potent...

  13. Hazard rate model and statistical analysis of a compound point process

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2005-01-01

    Roč. 41, č. 6 (2005), s. 773-786 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * Cox regression model * intensity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005

  14. Models for recurrent gas release event behavior in hazardous waste tanks

    International Nuclear Information System (INIS)

    Anderson, D.N.; Arnold, B.C.

    1994-08-01

    Certain radioactive waste storage tanks at the United States Department of Energy Hanford facilities continuously generate gases as a result of radiolysis and chemical reactions. The congealed sludge in these tanks traps the gases and causes the level of the waste within the tanks to rise. The waste level continues to rise until the sludge becomes buoyant and ''rolls over'', changing places with heavier fluid on top. During a rollover, the trapped gases are released, resulting in a sudden drop in the waste level. This is known as a gas release event (GRE). After a GRE, the waste re-congeals and gas again accumulates, leading to another GRE. We present nonlinear time series models that produce simulated sample paths closely resembling the temporal history of waste levels in these tanks. The models also imitate the random GRE behavior observed in the temporal waste-level history of a storage tank. We are interested in using the structure of these models to understand the probabilistic behavior of the random variable ''time between consecutive GREs''. Understanding the stochastic nature of this random variable is important because the hydrogen and nitrous oxide gases released during a GRE are flammable and the ammonia that is released is a health risk. From a safety perspective, activity around such waste tanks should be halted when a GRE is imminent. With credible GRE models, we can establish time windows in which waste tank research and maintenance activities can be safely performed

  15. Life-Stage Physiologically-Based Pharmacokinetic (PBPK) Model Applications to Screen Environmental Hazards.

    Science.gov (United States)

    This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...

  16. Tsunami Hazard in La Réunion Island (SW Indian Ocean): Scenario-Based Numerical Modelling on Vulnerable Coastal Sites

    Science.gov (United States)

    Allgeyer, S.; Quentel, É.; Hébert, H.; Gailler, A.; Loevenbruck, A.

    2017-08-01

    Several major tsunamis have affected the southwest Indian Ocean area since the 2004 Sumatra event, and some of them (2005, 2006, 2007 and 2010) have hit La Réunion Island in the southwest Indian Ocean. However, tsunami hazard is not well defined for La Réunion Island, where vulnerable coastlines can be exposed. This study offers a first tsunami hazard assessment for La Réunion Island. We first review the historical tsunami observations made on the coastlines, where high tsunami waves (2-3 m) have been reported on the western coast, especially during the 2004 Indian Ocean tsunami. Numerical models of historical scenarios yield results consistent with available observations at the coastal sites (the harbours of La Pointe des Galets and Saint-Paul). The 1833 Pagai earthquake and tsunami can be considered as the worst-case historical scenario for this area. In a second step, we assess the tsunami exposure by covering the major subduction zones with synthetic events of constant magnitude (8.7, 9.0 and 9.3). The magnitude 8.7 scenarios all generate strong currents in the harbours (3-7 m s^{-1}) and about 2 m of maximum tsunami height without significant inundation. The analysis of the magnitude 9.0 events confirms that the main commercial harbour (Port Est) is more vulnerable than Port Ouest and that flooding in Saint-Paul is limited to the beach area and the river mouth. Finally, the magnitude 9.3 scenarios show limited inundations close to the beach and in the riverbed in Saint-Paul. More generally, the results confirm that, for La Réunion, the Sumatra subduction zone is the most threatening non-local source area for tsunami generation. This study also shows that far-field coastal sites should be prepared for tsunami hazard and that further work is needed to improve operational warning procedures. Forecast methods should be developed to provide tools to enable the authorities to anticipate the local effects of tsunamis and to evacuate the harbours in

  17. Hazardous Chemicals

    Centers for Disease Control (CDC) Podcasts

    2007-04-10

    Chemicals are a part of our daily lives, providing many products and modern conveniences. With more than three decades of experience, the Centers for Disease Control and Prevention (CDC) has been at the forefront of efforts to protect people and assess their exposure to environmental and hazardous chemicals. This report provides information about hazardous chemicals and useful tips on how to protect yourself and your family from harmful exposure.  Created: 4/10/2007 by CDC National Center for Environmental Health.   Date Released: 4/13/2007.

  18. Modelling of the impact of the Rhone River N:P ratios over the NW Mediterranean planktonic food web

    Science.gov (United States)

    Alekseenko, Elena; Baklouti, Melika; Carlotti, François

    2016-04-01

    The origin of the high N:P ratios in the Mediterranean Sea is one of the important open questions raised by the scientific community. During the last two decades it was observed that the inorganic NO3:PO4 ratio in major Mediterranean rivers, including the Rhone River, has dramatically increased, thereby strengthening the P-limitation in Mediterranean waters (Ludwig et al., 2009; The MerMex group, 2011) and, as a result, increasing the anomaly in the NO3:PO4 ratio of the Gulf of Lions (GoL) and of the whole NW Mediterranean. The N:P ratios in seawater and in the metabolic requirements for plankton growth are indeed of particular interest, as these proportions determine which nutrient will limit biological productivity at the base of the food web and may select plankton communities with distinct biogeochemical function (Deutsch & Weber, 2012). In this context, and in the same spirit as the study of Parsons & Lalli (2002), an interesting question is whether high NO3:PO4 ratios in seawater can favor dead-end gelatinous food chains to the detriment of chains producing fish or direct food for fish. More generally, we aim at characterizing the impact of changes in the NO3:PO4 ratio on the structure of the planktonic food web in the Mediterranean Sea. The Eco3M-MED biogeochemical model (Baklouti et al., 2006a,b; Alekseenko et al., 2014), coupled with the hydrodynamic model MARS3D (Lazure & Dumas, 2008), is used to investigate the impact of Rhone River inputs on the structure of the first levels of the trophic web of the NW Mediterranean Sea. The fact that the model describes each biogenic compartment in terms of its abundance (for organisms) and its carbon, phosphorus, nitrogen and chlorophyll contents (for autotrophs) means that the intracellular quotas and ratios of each organism can be calculated at any time. This provides information on the intracellular status of organisms, on the elements that limit
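
    A minimal sketch of this bookkeeping, with hypothetical pool values and none of the Eco3M-MED dynamics, shows how intracellular quotas and N:P ratios follow directly from multi-element state variables:

      # Each autotroph compartment is carried in abundance, carbon, nitrogen,
      # phosphorus and chlorophyll units, so quotas and ratios are simple divisions.
      # All pool values below are hypothetical placeholders.
      compartments = {
          # abundance (cell/L), C, N, P (mol/L), Chl (mg/L)
          "picophytoplankton": {"cells": 1e8, "C": 5.0e-6, "N": 7.5e-7, "P": 3.0e-8, "Chl": 0.2},
          "diatoms":           {"cells": 1e5, "C": 2.0e-6, "N": 2.5e-7, "P": 2.0e-8, "Chl": 0.4},
      }

      for name, pool in compartments.items():
          n_quota = pool["N"] / pool["cells"]      # mol N per cell
          p_quota = pool["P"] / pool["cells"]      # mol P per cell
          np_ratio = pool["N"] / pool["P"]         # intracellular N:P (mol:mol)
          print(f"{name}: N quota={n_quota:.2e}, P quota={p_quota:.2e}, N:P={np_ratio:.1f}")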

  19. Development of evaluation method for software hazard identification techniques

    International Nuclear Information System (INIS)

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-01-01

    This research evaluated currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes in view of their characteristics: dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various software hazard identification combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for representing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness, complexity and implementation cost. This evaluation method can be a platform for reaching a common consensus among the stakeholders. As software hazard identification techniques evolve, the evaluation results could change. However, the insight into software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)
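
    A minimal sketch of such an index-based comparison, using assumed ratings and weights rather than the paper's actual scores, ranks candidate technique combinations by a weighted sum over the indexes:

      # Rate each hazard-identification combination against the proposed indexes
      # and rank by weighted score. Ratings (1-5) and weights are illustrative only.
      indexes = ["dynamic capability", "completeness", "achievability",
                 "detail", "signal/noise ratio", "complexity", "implementation cost"]
      weights = [0.2, 0.2, 0.15, 0.15, 0.1, 0.1, 0.1]      # assumed relative importance

      combinations = {
          "PHA+FMEA+FTA+Markov": [2, 4, 2, 3, 2, 3, 2],
          "DFM":                 [5, 3, 4, 4, 4, 2, 2],
          "Simulation-based":    [5, 3, 4, 4, 4, 2, 1],
      }

      scores = {name: sum(w * r for w, r in zip(weights, ratings))
                for name, ratings in combinations.items()}
      for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
          print(f"{name}: weighted score = {score:.2f}")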

  20. Analysis of risk indicators and issues associated with applications of screening model for hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Buck, J.W.; Strenge, D.L.; Droppo, J.G. Jr.

    1990-12-01

    Risk indicators, such as population risk, maximum individual risk, time of arrival of contamination, and maximum water concentrations, were analyzed to determine their effect on results from a screening model for hazardous and radioactive waste sites. The analysis of risk indicators is based on calculated exposures to airborne and waterborne contamination predicted with the Multimedia Environmental Pollutant Assessment System (MEPAS) model. The different risk indicators were analyzed based on constituent type and on transport and exposure pathways. Three of the specific comparisons made are (1) population-based versus maximum individual-based risk indicators, (2) time of arrival of contamination, and (3) different threshold assumptions for noncarcinogenic impacts. Comparison of indicators for population-based and maximum individual-based human health risk suggests that these two parameters are highly correlated, but for a given problem one may be more important than the other. The results indicate that the arrival distribution for different levels of contamination reaching a receptor can also be helpful in decisions regarding the use of resources for remediating short- and long-term environmental problems. The addition of information from a linear model for noncarcinogenic impacts allows interpretation of results below the reference dose (RfD) levels, which might help in decisions for certain applications. The analysis of risk indicators suggests that important information may be lost by the use of a single indicator to represent public health risk and that multiple indicators should be considered. 15 refs., 8 figs., 1 tab.
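
    A minimal sketch of computing several indicators side by side, with stand-in concentration histories rather than MEPAS output (all quantities below are hypothetical), illustrates why a single indicator can hide information:

      import numpy as np

      # Derive several risk indicators from predicted receptor concentrations.
      rng = np.random.default_rng(2)
      years = np.arange(0, 200)
      # stand-in concentration histories for 3 receptor locations (mg/L)
      conc = np.maximum(0.0, rng.normal(0.0, 0.01, (3, years.size))
                        + 0.05 * np.exp(-((years - 80) ** 2) / 400.0))
      population = np.array([1200, 300, 40])        # people per receptor
      unit_risk = 1e-3                              # hypothetical lifetime risk per mg/L

      max_conc = conc.max(axis=1)                               # maximum water concentration
      arrival = np.array([years[c > 0.01].min() if (c > 0.01).any() else np.inf
                          for c in conc])                       # first year above a threshold
      individual_risk = unit_risk * max_conc                    # maximum individual risk
      population_risk = (individual_risk * population).sum()    # expected cases, all receptors

      print("max individual risk:", individual_risk.max())
      print("population risk:", population_risk)
      print("earliest arrival (yr):", arrival.min())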

  1. A dynamic approach for the impact of a toxic gas dispersion hazard considering human behaviour and dispersion modelling.

    Science.gov (United States)

    Lovreglio, Ruggiero; Ronchi, Enrico; Maragkos, Georgios; Beji, Tarek; Merci, Bart

    2016-11-15

    The release of toxic gases due to natural/industrial accidents or terrorist attacks in populated areas can have tragic consequences. To prevent and evaluate the effects of these disasters, different approaches and modelling tools have been introduced in the literature. These instruments are valuable tools for risk managers performing risk assessments of threatened areas. Despite the significant improvements in hazard assessment in case of toxic gas dispersion, these analyses do not generally include the impact of human behaviour and people movement during emergencies. This work aims at providing an approach which considers both gas dispersion modelling and evacuation movement in order to improve the accuracy of risk assessment for disasters involving toxic gases. The approach is applied to a hypothetical scenario including a ship releasing nitrogen dioxide (NO2) on a crowd attending a music festival. The difference between the results obtained with existing static methods (people do not move) and a dynamic approach (people move away from the danger) which considers people movement with different degrees of sophistication (either a simple linear path or more complex behavioural modelling) is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
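
    A minimal sketch of the static-versus-dynamic comparison, using a crude crosswind Gaussian concentration profile and straight-line walking instead of the paper's dispersion and behavioural models (every parameter below is a hypothetical placeholder), is given here:

      import numpy as np

      # Compare the cumulative toxic dose of a static crowd on the plume axis with
      # that of a crowd walking crosswind away from the axis.
      def concentration(y, sigma_y=30.0, c_axis=5.0):
          """Toy NO2 concentration (mg/m^3) at crosswind offset y (m)."""
          return c_axis * np.exp(-(y ** 2) / (2.0 * sigma_y ** 2))

      dt, steps = 1.0, 600                       # 10 minutes at 1 s resolution
      y_static = np.zeros(50)                    # 50 people on the plume axis, not moving
      y_moving = np.zeros(50)
      dose_static = np.zeros(50)
      dose_moving = np.zeros(50)

      for _ in range(steps):
          dose_static += concentration(y_static) * dt
          dose_moving += concentration(y_moving) * dt
          y_moving += 1.2 * dt                   # evacuees walk crosswind at 1.2 m/s

      print("mean dose, static crowd:", dose_static.mean().round(1))
      print("mean dose, moving crowd:", dose_moving.mean().round(1))

    Even this toy set-up reproduces the qualitative point of the paper: accounting for people moving away from the danger substantially changes the estimated exposure compared with a static assumption.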

  2. A mechanistic modelling and data assimilation approach to estimate the carbon/chlorophyll and carbon/nitrogen ratios in a coupled hydrodynamical-biological model