WorldWideScience

Sample records for model hazard ratio

  1. Mark-specific hazard ratio model with missing multivariate marks.

    Science.gov (United States)

    Juraska, Michal; Gilbert, Peter B

    2016-10-01

    An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 'Step' HIV VE trial.
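
    For orientation, the generic structure of the two procedures (standard inverse-probability-weighting notation, assumed here rather than taken from the record) can be sketched as follows, where R_i indicates a complete case (observed mark), pi_i is the estimated probability of completeness, and U_i is the complete-data estimating function:

      % (i) IPW complete-case estimating function
      U_{IPW}(\beta) = \sum_{i=1}^{n} \frac{R_i}{\hat{\pi}_i}\, U_i(\beta)

      % (ii) augmented IPW, borrowing auxiliary data A_i predictive of the mark
      U_{AIPW}(\beta) = \sum_{i=1}^{n} \left[ \frac{R_i}{\hat{\pi}_i}\, U_i(\beta)
          + \Big(1 - \frac{R_i}{\hat{\pi}_i}\Big)\, E\{U_i(\beta) \mid A_i\} \right]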

  2. The Additive Hazard Mixing Models

    Institute of Scientific and Technical Information of China (English)

    Ping LI; Xiao-liang LING

    2012-01-01

    This paper is concerned with the aging and dependence properties in the additive hazard mixing models including some stochastic comparisons. Further, some useful bounds of reliability functions in additive hazard mixing models are obtained.

  3. Identifying and modeling safety hazards

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Jesse; Bahill, Terry; Werner, Paul W.

    2000-03-29

    The hazard model described in this paper is designed to accept data over the Internet from distributed databases. A hazard object template is used to ensure that all necessary descriptors are collected for each object. Three methods for combining the data are compared and contrasted. Three methods are used for handling the three types of interactions between the hazard objects.

  4. Estimating hazard ratios in nested case-control studies by Mantel-Haenszel method

    Institute of Scientific and Technical Information of China (English)

    Zhang, Zhongzhan

    2001-01-01

    In this article, a class of Mantel-Haenszel type estimators of hazard ratios in the proportional hazards model is presented for the simple nested case-control study. The estimators have the form of the Mantel-Haenszel estimator of odds ratios, and it is shown that the estimators are dually consistent and asymptotically normal. Dually consistent estimators of the covariance matrices of the proposed estimators are also developed. An example is given to illustrate the estimators.
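
    For context, the proposed estimators mimic the form of the classical Mantel-Haenszel odds-ratio estimator over a series of 2x2 tables (standard notation, not taken from the record):

      \hat{\psi}_{MH} = \frac{\sum_k a_k d_k / n_k}{\sum_k b_k c_k / n_k}

    where a_k, b_k, c_k, d_k are the cell counts of the k-th exposure-by-outcome table and n_k is its total; roughly speaking, in the nested case-control setting the tables are formed from each case and its matched sampled controls.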

  5. Biostatistics primer: what a clinician ought to know: hazard ratios.

    Science.gov (United States)

    Barraclough, Helen; Simms, Lorinda; Govindan, Ramaswamy

    2011-06-01

    Hazard ratios (HRs) are used commonly to report results from randomized clinical trials in oncology. However, they remain one of the most perplexing concepts for clinicians. A good understanding of HRs is needed to effectively interpret the medical literature to make important treatment decisions. This article provides clear guidelines to clinicians about how to appropriately interpret HRs. While this article focuses on the commonly used methods, the authors acknowledge that other statistical methods exist for analyzing survival data.
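
    As a hedged worked example of the kind of interpretation the primer addresses: if both arms have approximately constant (exponential) hazards, the HR equals the ratio of the hazard rates and the inverse ratio of the median survival times,

      HR = \frac{\lambda_T}{\lambda_C}, \qquad m = \frac{\ln 2}{\lambda} \;\Rightarrow\; HR = \frac{m_C}{m_T}

    so an HR of 0.67 against a control median of 6 months corresponds to a treatment median of roughly 6 / 0.67, or about 9 months. The exponential assumption is purely illustrative.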

  6. Two models for evaluating landslide hazards

    Science.gov (United States)

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
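
    A minimal sketch of the empirical likelihood-ratio calculation described above, with illustrative variable names (not the authors' code) and per-layer ratios combined multiplicatively under the conditional independence assumption:

      import numpy as np

      def likelihood_ratio_map(layers, landslide_mask):
          """Empirical likelihood-ratio hazard index (illustrative sketch).

          layers         : list of integer grids, each holding class codes for one
                           predictor (e.g. slope class, aspect class, lithology)
          landslide_mask : boolean grid, True where landslides are mapped
          Per layer, LR = P(class | landslide) / P(class | no landslide); under
          conditional independence the layer LRs are multiplied cell by cell.
          """
          index = np.ones(layers[0].shape, dtype=float)
          for grid in layers:
              lr = np.ones_like(index)
              for cls in np.unique(grid):
                  in_cls = grid == cls
                  p_slide = in_cls[landslide_mask].mean()    # P(class | landslide)
                  p_stable = in_cls[~landslide_mask].mean()  # P(class | no landslide)
                  lr[in_cls] = p_slide / max(p_stable, 1e-12)
              index *= lr
          return index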

  7. Computer Model Locates Environmental Hazards

    Science.gov (United States)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  8. Models of volcanic eruption hazards

    Energy Technology Data Exchange (ETDEWEB)

    Wohletz, K.H.

    1992-01-01

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  9. Crossing Hazard Functions in Common Survival Models.

    Science.gov (United States)

    Zhang, Jiajia; Peng, Yingwei

    2009-10-15

    Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models.
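
    For reference, the hazard formulations of the three models are conventionally written as follows (standard forms; the paper's notation may differ):

      \text{PH:}\;\;  h(t \mid x) = h_0(t)\, e^{\beta x}
      \text{AFT:}\;\; h(t \mid x) = e^{\beta x}\, h_0(t\, e^{\beta x})
      \text{AH:}\;\;  h(t \mid x) = h_0(t\, e^{\beta x})

    Under PH the ratio of the two hazards is constant, so they never cross; under AH the hazards coincide at t = 0, which is why the crossing behaviour depends on the shape of the baseline hazard h_0.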

  10. Modeling lahar behavior and hazards

    Science.gov (United States)

    Manville, Vernon; Major, Jon J.; Fagents, Sarah A.

    2013-01-01

    Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to > 100 km at speeds exceeding tens of km hr-1. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.

  11. Flash Flood Hazard Susceptibility Mapping Using Frequency Ratio and Statistical Index Methods in Coalmine Subsidence Areas

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2016-09-01

    This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software. Based on the flash flood hazard locations, a flood hazard inventory map was built. Seventy percent (n = 60) of the flood hazard locations were randomly selected for building the models. The remaining 30% (n = 25) of the flood hazard locations were used for validation. Considering that the XQG used to be a coal mining area, coalmine caves and subsidence caused by coal mining exist in this catchment, as well as many ground fissures. Thus, this study took the subsidence risk level into consideration for FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values of each conditioning parameter and checked their impacts on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using frequency ratio (FR) natural breaks, statistical index (SI) natural breaks, and FR manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. It was found that FR modeling using a natural breaks classification method was more appropriate for generating FFHSM for the Xiqu Gully.
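
    A minimal sketch of the frequency-ratio step described above (variable names are illustrative, not taken from the study):

      import numpy as np

      def frequency_ratio(class_grid, flood_mask):
          """FR of a class = (% of flood locations in the class) / (% of all cells in the class)."""
          fr = {}
          total_floods = flood_mask.sum()
          total_cells = class_grid.size
          for cls in np.unique(class_grid):
              in_cls = class_grid == cls
              pct_floods = flood_mask[in_cls].sum() / total_floods
              pct_area = in_cls.sum() / total_cells
              fr[cls] = pct_floods / pct_area
          return fr

    The susceptibility index of a cell is then typically obtained by summing the FR values of its classes over all ten conditioning parameters, and accuracy is assessed with AUC against the held-out locations.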

  12. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    In this paper we present a comparison among the distributions used in hazard analysis. A simulation technique has been used to study the behavior of hazard distribution models. The fundamentals of hazard analysis are discussed using failure criteria. We also illustrate the flexibility of the hazard modeling distribution in approximating different distributions.

  13. On confidence intervals for the hazard ratio in randomized clinical trials.

    Science.gov (United States)

    Lin, Dan-Yu; Dai, Luyan; Cheng, Gang; Sailer, Martin Oliver

    2016-12-01

    The log-rank test is widely used to compare two survival distributions in a randomized clinical trial, while partial likelihood (Cox, 1975) is the method of choice for making inference about the hazard ratio under the Cox (1972) proportional hazards model. The Wald 95% confidence interval of the hazard ratio may include the null value of 1 when the p-value of the log-rank test is less than 0.05. Peto et al. (1977) provided an estimator for the hazard ratio based on the log-rank statistic; the corresponding 95% confidence interval excludes the null value of 1 if and only if the p-value of the log-rank test is less than 0.05. However, Peto's estimator is not consistent, and the corresponding confidence interval does not have correct coverage probability. In this article, we construct the confidence interval by inverting the score test under the (possibly stratified) Cox model, and we modify the variance estimator such that the resulting score test for the null hypothesis of no treatment difference is identical to the log-rank test in the possible presence of ties. Like Peto's method, the proposed confidence interval excludes the null value if and only if the log-rank test is significant. Unlike Peto's method, however, this interval has correct coverage probability. An added benefit of the proposed confidence interval is that it tends to be more accurate and narrower than the Wald confidence interval. We demonstrate the advantages of the proposed method through extensive simulation studies and a colon cancer study.
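
    For orientation, the Peto log-rank-based estimator mentioned above has the familiar observed-minus-expected form (standard notation, not from the article):

      \widehat{\ln HR}_{\text{Peto}} = \frac{O - E}{V}, \qquad
      95\%\ \text{CI:}\ \exp\!\left(\frac{O - E}{V} \pm \frac{1.96}{\sqrt{V}}\right)

    where O - E is the observed minus expected number of events in one arm and V is the (hypergeometric) variance of the log-rank statistic; the proposed interval instead inverts the score test under the Cox model with a modified variance estimator.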

  14. DYNAMIC TEACHING RATIO PEDAGOGIC MODEL

    Directory of Open Access Journals (Sweden)

    Chen Jiaying

    2010-11-01

    This paper outlines an innovative pedagogic model, the Dynamic Teaching Ratio (DTR) Pedagogic Model, for learning design and teaching strategy aimed at postsecondary technical education. The model draws on the theory of differential learning, which is widely recognized as an important tool for engaging students and addressing the individual needs of all students. The DTR model caters to the different abilities, interests or learning needs of students and provides different learning approaches based on a student's learning ability. The model aims to improve students' academic performance through increasing the lecturer-to-student ratio in the classroom setting. An experimental case study on the model was conducted and the outcome was favourable. Hence, a large-scale implementation was carried out upon the successful trial run. The paper discusses the methodology of the model and its application through the case study and the large-scale implementation.

  15. On penalized likelihood estimation for a non-proportional hazards regression model.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  16. Parametric Regression Models Using Reversed Hazard Rates

    Directory of Open Access Journals (Sweden)

    Asokan Mulayath Variyath

    2014-01-01

    Proportional hazards regression models are widely used in survival analysis to understand and exploit the relationship between survival time and covariates. For left-censored survival times, reversed hazard rate functions are more appropriate. In this paper, we develop a parametric proportional reversed hazard rates model using an inverted Weibull distribution. The estimation and construction of confidence intervals for the parameters are discussed. We assess the performance of the proposed procedure based on a large number of Monte Carlo simulations. We illustrate the proposed method using a real case example.
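
    For readers unfamiliar with reversed hazard rates, the standard definitions assumed here (the article's notation may differ) are

      \tilde{\lambda}(t) = \frac{f(t)}{F(t)}, \qquad
      \tilde{\lambda}(t \mid x) = \tilde{\lambda}_0(t)\, e^{\beta x}
      \;\Longleftrightarrow\; F(t \mid x) = [F_0(t)]^{\,e^{\beta x}}

    i.e. proportionality on the reversed hazard scale is equivalent to raising the baseline distribution function to a power, which is what makes the formulation convenient for left-censored data.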

  17. Hazard Warning: model misuse ahead

    DEFF Research Database (Denmark)

    Dickey-Collas, M.; Payne, Mark; Trenkel, V.

    2014-01-01

    The use of modelling approaches in marine science, and in particular fisheries science, is explored. We highlight that the choice of model used for an analysis should account for the question being posed or the context of the management problem. We examine a model-classification scheme based......-users (i.e. managers or policy developers). The combination of attributes leads to models that are considered to have empirical, mechanistic, or analytical characteristics, but not a combination of them. In fisheries science, many examples can be found of models with these characteristics. However, we...

  18. Virtual Research Environments for Natural Hazard Modelling

    Science.gov (United States)

    Napier, Hazel; Aldridge, Tim

    2017-04-01

    The Natural Hazards Partnership (NHP) is a group of 17 collaborating public sector organisations providing a mechanism for co-ordinated advice to government and agencies responsible for civil contingency and emergency response during natural hazard events. The NHP has set up a Hazard Impact Model (HIM) group tasked with modelling the impact of a range of UK hazards with the aim of delivery of consistent hazard and impact information. The HIM group consists of 7 partners initially concentrating on modelling the socio-economic impact of 3 key hazards - surface water flooding, land instability and high winds. HIM group partners share scientific expertise and data within their specific areas of interest including hydrological modelling, meteorology, engineering geology, GIS, data delivery, and modelling of socio-economic impacts. Activity within the NHP relies on effective collaboration between partners distributed across the UK. The NHP are acting as a use case study for a new Virtual Research Environment (VRE) being developed by the EVER-EST project (European Virtual Environment for Research - Earth Science Themes: a solution). The VRE is allowing the NHP to explore novel ways of cooperation including improved capabilities for e-collaboration, e-research, automation of processes and e-learning. Collaboration tools are complemented by the adoption of Research Objects, semantically rich aggregations of resources enabling the creation of uniquely identified digital artefacts resulting in reusable science and research. Application of the Research Object concept to HIM development facilitates collaboration, by encapsulating scientific knowledge in a shareable format that can be easily shared and used by partners working on the same model but within their areas of expertise. This paper describes the application of the VRE to the NHP use case study. It outlines the challenges associated with distributed partnership working and how they are being addressed in the VRE. A case

  19. Estimating the relative hazard by the ratio of logarithms of event-free proportions.

    Science.gov (United States)

    Perneger, Thomas V

    2008-09-01

    Clinical trials typically examine associations between an intervention and the occurrence of a clinical event. The association is often reported as a relative risk, more rarely as an odds ratio. Unfortunately, when the scientific interest lies with the ratio of incidence rates, both these statistics are inaccurate: the odds ratio is too extreme, and the relative risk too conservative. These biases are particularly strong when the outcomes are common. This paper describes an alternative statistic, the ratio of logarithms of event-free proportions (or relative log survival), which is simple to compute yet unbiased vis-à-vis the relative hazard. A formula to compute the sampling error of this statistic is also provided. Multivariate analysis can be conducted using complementary log-log regression. Precise knowledge of event occurrence times is not required for these analyses. Relative log survival may be particularly useful for meta-analyses of trials in which the proportion of events varies between studies.
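
    The link to the relative hazard can be seen from the identity S(t) = exp{-\Lambda(t)} (a sketch under proportional hazards, not the paper's own derivation):

      \frac{\ln S_1(t)}{\ln S_0(t)} = \frac{\Lambda_1(t)}{\Lambda_0(t)} = HR
      \qquad\Rightarrow\qquad
      \widehat{HR} = \frac{\ln(\text{event-free proportion, arm 1})}{\ln(\text{event-free proportion, arm 0})}

    which is why only the end-of-trial event-free proportions, and not the event times, are needed.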

  20. A ratio model of perceptual transparency.

    Science.gov (United States)

    Tommasi, M

    1999-12-01

    A ratio model of the achromatic transparency of a phenomenal surface on a bipartite background is proposed. The model asserts that transparency corresponds to the evaluation of the ratio of the lightness difference inside the transparent surface to the difference in reference lightness inside the background. It applies to both balanced and unbalanced transparency. The ratio model was compared experimentally with the previous perceptual model of achromatic transparency proposed by Metelli. Each model was tested by comparing the rated with the predicted transparency. Analysis shows that the ratio model provides better predictions of transparency than those provided by Metelli's model.

  1. Proportional hazards models with discrete frailty.

    Science.gov (United States)

    Caroni, Chrys; Crowder, Martin; Kimber, Alan

    2010-07-01

    We extend proportional hazards frailty models for lifetime data to allow a negative binomial, Poisson, Geometric or other discrete distribution of the frailty variable. This might represent, for example, the unknown number of flaws in an item under test. Zero frailty corresponds to a limited failure model containing a proportion of units that never fail (long-term survivors). Ways of modifying the model to avoid this are discussed. The models are illustrated on a previously published set of data on failures of printed circuit boards and on new data on breaking strengths of samples of cord.
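
    A common way to write such a model (notation assumed, not taken from the record) is

      h(t \mid x, Z) = Z\, h_0(t)\, e^{\beta x}, \qquad Z \in \{0, 1, 2, \dots\}

    with Z negative binomial, Poisson or geometric; P(Z = 0) is then the long-term-survivor (never-fail) fraction that the authors discuss modifying the model to avoid.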

  2. Experimental Concepts for Testing Seismic Hazard Models

    Science.gov (United States)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  3. The median hazard ratio: a useful measure of variance and general contextual effects in multilevel survival analysis.

    Science.gov (United States)

    Austin, Peter C; Wagner, Philippe; Merlo, Juan

    2017-03-15

    Multilevel data occurs frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster-specific random effects which allow one to partition the total individual variance into between-cluster variation and between-individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time-to-event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR) that corresponds to the MOR in multilevel (i.e., 'frailty') Cox proportional hazards regression is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
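
    The authors supply R code; a minimal Python analogue of the usual MHR formula (assuming a log-normal frailty, i.e. normal cluster effects with variance sigma^2 on the log-hazard scale, by analogy with the median odds ratio) would be:

      from math import exp, sqrt
      from scipy.stats import norm

      def median_hazard_ratio(frailty_variance):
          """MHR = exp(sqrt(2 * sigma^2) * z_0.75), sigma^2 = cluster variance on the log-hazard scale."""
          return exp(sqrt(2.0 * frailty_variance) * norm.ppf(0.75))

      print(round(median_hazard_ratio(0.25), 2))  # e.g. sigma^2 = 0.25 gives an MHR of about 1.61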

  4. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  5. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model, and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are also given to illustrate different aging properties and stochastic comparisons of the model.

  6. Development of hazard-compatible building fragility and vulnerability models

    Science.gov (United States)

    Karaca, E.; Luco, N.

    2008-01-01

    We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.

  7. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  8. Satellite image collection modeling for large area hazard emergency response

    Science.gov (United States)

    Liu, Shufan; Hodgson, Michael E.

    2016-08-01

    Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons and image collection priorities for different portion of impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.

  9. Coordinate descent methods for the penalized semiparametric additive hazards model

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    ...The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...

  10. Coordinate descent methods for the penalized semiparametric additive hazard model

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2012-01-01

    ...The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...

  11. Uncertainty and Probability in Natural Hazard Assessment and Their Role in the Testability of Hazard Models

    Science.gov (United States)

    Marzocchi, Warner; Jordan, Thomas

    2014-05-01

    Probabilistic assessment has become a widely accepted procedure to estimate quantitatively natural hazards. In essence probabilities are meant to quantify the ubiquitous and deep uncertainties that characterize the evolution of natural systems. However, notwithstanding the very wide use of the terms 'uncertainty' and 'probability' in natural hazards, the way in which they are linked, how they are estimated and their scientific meaning are far from being clear, as testified by the last Intergovernmental Panel on Climate Change (IPCC) report and by its subsequent review. The lack of a formal framework to interpret uncertainty and probability coherently has paved the way for some of the strongest critics of hazard analysis; in fact, it has been argued that most of natural hazard analyses are intrinsically 'unscientific'. For example, among the concerns is the use of expert opinion to characterize the so-called epistemic uncertainties; many have argued that such personal degrees of belief cannot be measured and, by implication, cannot be tested. The purpose of this talk is to confront and clarify the conceptual issues associated with the role of uncertainty and probability in natural hazard analysis and the conditions that make a hazard model testable and then 'scientific'. Specifically, we show that testability of hazard models requires a suitable taxonomy of uncertainty embedded in a proper logical framework. This taxonomy of uncertainty is composed by aleatory variability, epistemic uncertainty, and ontological error. We discuss their differences, the link with the probability, and their estimation using data, models, and subjective expert opinion. We show that these different uncertainties, and the testability of hazard models, can be unequivocally defined only for a well-defined experimental concept that is a concept external to the model under test. All these discussions are illustrated through simple examples related to the probabilistic seismic hazard analysis.

  12. Lahar Hazard Modeling at Tungurahua Volcano, Ecuador

    Science.gov (United States)

    Sorensen, O. E.; Rose, W. I.; Jaya, D.

    2003-04-01

    lahar-hazard-zones using a digital elevation model (DEM), was used to construct a hazard map for the volcano. The 10 meter resolution DEM was constructed for Tungurahua Volcano using scanned topographic lines obtained from the GIS Department at the Escuela Politécnica Nacional, Quito, Ecuador. The steep topographic gradients and rapid downcutting of most rivers draining the edifice prevents the deposition of lahars on the lower flanks of Tungurahua. Modeling confirms the high degree of flow channelization in the deep Tungurahua canyons. Inundation zones observed and shown by LAHARZ at Baños yield identification of safe zones within the city which would provide safety from even the largest magnitude lahar expected.

  13. A quantitative model for volcanic hazard assessment

    OpenAIRE

    W. Marzocchi; Sandri, L.; Furlan, C

    2006-01-01

    Volcanic hazard assessment is a basic ingredient for risk-based decision-making in land-use planning and emergency management. Volcanic hazard is defined as the probability of any particular area being affected by a destructive volcanic event within a given period of time (Fournier d’Albe 1979). The probabilistic nature of such an important issue derives from the fact that volcanic activity is a complex process, characterized by several and usually unknown degrees o...

  14. Parametric hazard rate models for long-term sickness absence

    NARCIS (Netherlands)

    Koopmans, Petra C.; Roelen, Corne A. M.; Groothoff, Johan W.

    2009-01-01

    In research on the time to onset of sickness absence and the duration of sickness absence episodes, Cox proportional hazard models are in common use. However, parametric models are to be preferred when time in itself is considered as independent variable. This study compares parametric hazard rate m

  15. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best distributions were diverse. Given the best hazard-based models of each incident time phase, the prediction result can be reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.

  16. Incident duration modeling using flexible parametric hazard-based models.

    Science.gov (United States)

    Li, Ruimin; Shang, Pan

    2014-01-01

    Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, whose best distributions were diverse. Given the best hazard-based models of each incident time phase, the prediction result can be reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
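
    The accelerated failure time formulation underlying these duration models is usually written in log-linear form (standard notation; the paper's covariates and coding are not reproduced here):

      \ln T = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x} + \sigma\,\varepsilon

    where T is the incident duration and the assumed distribution of the error term (extreme value, logistic, normal, or log-gamma) yields the Weibull, log-logistic, log-normal, and generalized gamma duration models, respectively; gamma heterogeneity adds a multiplicative random effect to capture unobserved differences between incidents.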

  17. Nonlinear trading models through Sharpe Ratio maximization.

    Science.gov (United States)

    Choey, M; Weigend, A S

    1997-08-01

    While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
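
    A minimal sketch of the objective being maximized (illustrative only; in the paper the position is the output of a neural network and the maximization is done by gradient-based parameter updates):

      import numpy as np

      def sharpe_ratio(strategy_returns, risk_free=0.0, periods_per_year=252):
          """Annualized Sharpe Ratio of a series of per-period strategy returns."""
          excess = np.asarray(strategy_returns) - risk_free
          return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

      # With position w_t in [0, 1] between a risky and a risk-free asset, the
      # per-period strategy return is r_t = w_t * r_risky_t + (1 - w_t) * r_free_t,
      # and the network weights are adjusted to increase sharpe_ratio(r).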

  18. Additive Hazard Regression Models: An Application to the Natural History of Human Papillomavirus

    Directory of Open Access Journals (Sweden)

    Xianhong Xie

    2013-01-01

    There are several statistical methods for time-to-event analysis, among which the Cox proportional hazards model is the most commonly used. However, when the absolute change in risk, instead of the risk ratio, is of primary interest or when the proportional hazards assumption for the Cox model is violated, an additive hazard regression model may be more appropriate. In this paper, we give an overview of this approach and then apply a semiparametric as well as a nonparametric additive model to a data set from a study of the natural history of human papillomavirus (HPV) in HIV-positive and HIV-negative women. The results from the semiparametric model indicated on average an additional 14 oncogenic HPV infections per 100 woman-years related to CD4 count < 200 relative to HIV-negative women, and those from the nonparametric additive model showed an additional 40 oncogenic HPV infections per 100 women over 5 years of follow-up, while the estimated hazard ratio in the Cox model was 3.82. Although the Cox model can provide a better understanding of the exposure-disease association, the additive model is often more useful for public health planning and intervention.
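
    The contrast drawn above is between multiplicative and additive hazard structures, conventionally written as (standard notation):

      \text{Cox:}\quad h(t \mid \mathbf{x}) = h_0(t)\, \exp(\boldsymbol{\beta}^{\top}\mathbf{x})
      \qquad
      \text{Aalen additive:}\quad h(t \mid \mathbf{x}) = \beta_0(t) + \boldsymbol{\beta}(t)^{\top}\mathbf{x}

    so the additive coefficients are read directly as extra events per unit of person-time (for example, the 14 additional oncogenic HPV infections per 100 woman-years quoted above), rather than as a rate ratio.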

  19. Physical vulnerability modelling in natural hazard risk assessment

    Science.gov (United States)

    Douglas, J.

    2007-04-01

    An evaluation of the risk to an exposed element from a hazardous event requires a consideration of the element's vulnerability, which expresses its propensity to suffer damage. This concept allows the assessed level of hazard to be translated to an estimated level of risk and is often used to evaluate the risk from earthquakes and cyclones. However, for other natural perils, such as mass movements, coastal erosion and volcanoes, the incorporation of vulnerability within risk assessment is not well established and consequently quantitative risk estimations are not often made. This impedes the study of the relative contributions from different hazards to the overall risk at a site. Physical vulnerability is poorly modelled for many reasons: the cause of human casualties (from the event itself rather than by building damage); lack of observational data on the hazard, the elements at risk and the induced damage; the complexity of the structural damage mechanisms; the temporal and geographical scales; and the ability to modify the hazard level. Many of these causes are related to the nature of the peril therefore for some hazards, such as coastal erosion, the benefits of considering an element's physical vulnerability may be limited. However, for hazards such as volcanoes and mass movements the modelling of vulnerability should be improved by, for example, following the efforts made in earthquake risk assessment. For example, additional observational data on induced building damage and the hazardous event should be routinely collected and correlated and also numerical modelling of building behaviour during a damaging event should be attempted.

  1. Integrated Modeling for Flood Hazard Mapping Using Watershed Modeling System

    Directory of Open Access Journals (Sweden)

    Seyedeh S. Sadrolashrafi

    2008-01-01

    In this study, a new framework that integrates the Geographic Information System (GIS) with the Watershed Modeling System (WMS) for flood modeling is developed. It also interconnects the terrain models and the GIS software with commercial standard hydrological and hydraulic models, including HEC-1, HEC-RAS, etc. The Dez River Basin (about 16,213 km2) in Khuzestan province, Iran, is the study domain because it experiences frequent severe flash flooding. As a case study, a major flood in autumn of 2001 is chosen to examine the modeling framework. The model consists of a rainfall-runoff model (HEC-1) that converts excess precipitation to overland flow and channel runoff, and a hydraulic model (HEC-RAS) that simulates steady-state flow through the river channel network based on the HEC-1 peak hydrographs. In addition, it delineates maps of potential flood zonation for the Dez River Basin. These are achieved with state-of-the-art GIS using the WMS software. Watershed parameters are calibrated manually to obtain a good simulation of discharge at three sub-basins. With the calibrated discharge, WMS is capable of producing a flood hazard map. The modeling framework presented in this study demonstrates the accuracy and usefulness of the WMS software for flash flood control. The results of this research will benefit future modeling efforts by providing validated hydrological software to forecast flooding on a regional scale. The model was designed for the Dez River Basin, but this regional-scale model may be used as a prototype for model applications in other areas.

  2. Coordinate descent methods for the penalized semiparametric additive hazard model

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2012-01-01

    For survival data with a large number of explanatory variables, lasso penalized Cox regression is a popular regularization strategy. However, a penalized Cox model may not always provide the best fit to data and can be difficult to estimate in high dimension because of its intrinsic nonlinearity. The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...

  3. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  4. Ratio-model for the simulation of infrared spectra of pollution gases in complicated background

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A ratio-model for the computer simulation of infrared spectra of pollution gases in a complicated background is proposed. The characteristic spectrum of the hazardous pollution gas is simulated with background spectra which are measured by a passive Fourier transform infrared spectrometer. The simulated results agree well with the experimental results.

  5. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    ...Second-order Closure Integrated Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... ...to substantial differences in predictions... HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented...

  6. A conflict model for the international hazardous waste disposal dispute

    Energy Technology Data Exchange (ETDEWEB)

    Hu Kaixian, E-mail: k2hu@engmail.uwaterloo.ca [Department of Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1 (Canada); Hipel, Keith W., E-mail: kwhipel@uwaterloo.ca [Department of Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1 (Canada); Fang, Liping, E-mail: lfang@ryerson.ca [Department of Mechanical and Industrial Engineering, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada)

    2009-12-15

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  7. Hazard identification by extended multilevel flow modelling with function roles

    DEFF Research Database (Denmark)

    Wu, Jing; Zhang, Laibin; Jørgensen, Sten Bay

    2014-01-01

    HAZOP studies are widely accepted in chemical and petroleum industries as the method for conducting process hazard analysis related to design, maintenance and operation of the systems. In this paper, a HAZOP reasoning method based on function-oriented modelling, multilevel flow modelling (MFM), i...

  8. SCIPUFF - a generalized hazard dispersion model

    Energy Technology Data Exchange (ETDEWEB)

    Sykes, R.I.; Henn, D.S.; Parker, S.F.; Gabruk, R.S. [Titan Research and Technology, Princeton, NJ (United States)

    1996-12-31

    One of the more popular techniques for efficiently representing the dispersion process is the Gaussian puff model, which uses a collection of Lagrangian puffs with Gaussian concentration profiles. SCIPUFF (Second-order Closure Integrated Puff) is an advanced Gaussian puff model. SCIPUFF uses second-order turbulence closure techniques to relate the dispersion rates to measurable turbulent velocity statistics, providing a wide range of applicability. In addition, the closure model provides a prediction of the statistical variance in the concentration field which can be used to estimate the uncertainty in the dispersion prediction resulting from the inherent uncertainty in the wind field. SCIPUFF has been greatly extended from a power plant plume model to describe more general source characteristics, material properties, and longer range dispersion. In addition, a Graphical User Interface has been developed to provide interactive problem definition and output display. This presentation describes the major features of the model, and presents several example calculations.

  9. Regional landslide hazard assessment based on Distance Evaluation Model

    Institute of Scientific and Technical Information of China (English)

    Jiacun LI; Yan QIN; Jing LI

    2008-01-01

    There are many factors influencing landslide occurrence. The key to landslide control is to identify the regional landslide hazard factors. The Cameron Highlands of Malaysia was selected as the study area. Using a bivariate statistical analysis method with GIS software, the authors analyzed the relationships between landslides and environmental factors such as lithology, geomorphology, elevation, roads and land use. A Distance Evaluation Model was developed based on Landslide Density (LD), and the assessment of landslide hazard of the Cameron Highlands was performed. The result shows that the model has higher prediction precision.

  10. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    Science.gov (United States)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  11. TsuPy: Computational robustness in Tsunami hazard modelling

    Science.gov (United States)

    Schäfer, Andreas M.; Wenzel, Friedemann

    2017-05-01

    Modelling wave propagation is the most essential part in assessing the risk and hazard of tsunami and storm surge events. For the computational assessment of the variability of such events, many simulations are necessary. Even today, most of these simulations are generally run on supercomputers due to the large amount of computations necessary. In this study, a simulation framework, named TsuPy, is introduced to quickly compute tsunami events on a personal computer. It uses the parallelized power of GPUs to accelerate computation. The system is tailored to the application of robust tsunami hazard and risk modelling. It links up to geophysical models to simulate event sources. The system is tested and validated using various benchmarks and real-world case studies. In addition, the robustness criterion is assessed based on a sensitivity study comparing the error impact of various model elements e.g. of topo-bathymetric resolution, knowledge of Manning friction parameters and the knowledge of the tsunami source itself. This sensitivity study is tested on inundation modelling of the 2011 Tohoku tsunami, showing that the major contributor to model uncertainty is in fact the representation of earthquake slip as part of the tsunami source profile. TsuPy provides a fast and reliable tool to quickly assess ocean hazards from tsunamis and thus builds the foundation for a globally uniform hazard and risk assessment for tsunamis.

  12. Hazard Response Modeling Uncertainty (A Quantitative Method)

    Science.gov (United States)

    1988-10-01

    Only a fragment of the scanned abstract is legible: C_p denotes the concentration predicted by some component or model, and the variance of the ratio C_o/C_p is calculated and defined as var(Model I).

  13. Toward Building a New Seismic Hazard Model for Mainland China

    Science.gov (United States)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.

    2015-12-01

    At present, the only publicly available seismic hazard model for mainland China was generated by Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
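
    The tapered Gutenberg-Richter (TGR) relationship mentioned above is commonly written, following Kagan, as a Pareto distribution in seismic moment with an exponential taper at a corner moment. The sketch below is a hedged illustration with hypothetical parameter values, not the values estimated for the new China model.

    import math

    def moment_from_mw(mw):
        """Seismic moment (N*m) from moment magnitude: M0 = 10**(1.5*Mw + 9.05)."""
        return 10.0 ** (1.5 * mw + 9.05)

    def tgr_rate(mw, rate_t, mw_t=5.0, beta=0.67, mw_corner=8.0):
        """Annual rate of events with magnitude >= mw.
        rate_t: annual rate at the threshold magnitude mw_t;
        beta ~ (2/3)*b; mw_corner: corner magnitude constrained by the moment rate."""
        m = moment_from_mw(mw)
        m_t = moment_from_mw(mw_t)
        m_c = moment_from_mw(mw_corner)
        return rate_t * (m_t / m) ** beta * math.exp((m_t - m) / m_c)

    # e.g. expected annual rate of Mw >= 7 events given 1.2 events/yr above Mw 5:
    # print(tgr_rate(7.0, rate_t=1.2))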

  14. Application of remote sensed precipitation for landslide hazard assessment models

    Science.gov (United States)

    Kirschbaum, D. B.; Peters-Lidard, C. D.; Adler, R. F.; Kumar, S.; Harrison, K.

    2010-12-01

    The increasing availability of remotely sensed land surface and precipitation information provides new opportunities to improve upon existing landslide hazard assessment methods. This research considers how satellite precipitation information can be applied in two types of landslide hazard assessment frameworks: a global, landslide forecasting framework and a deterministic slope-stability model. Examination of both landslide hazard frameworks points to the need for higher resolution spatial and temporal precipitation inputs to better identify small-scale precipitation forcings that contribute to significant landslide triggering. This research considers how satellite precipitation information may be downscaled to account for local orographic impacts and better resolve peak intensities. Precipitation downscaling is employed in both models to better approximate local rainfall distribution, antecedent conditions, and intensities. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale and have the potential to significantly advance landslide hazard assessment tools. The first landslide forecasting tool, running in near real-time at http://trmm.gsfc.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. Results of the algorithm evaluation indicate that considering higher resolution susceptibility information is a key factor in better resolving potentially hazardous areas. However, success in resolving when landslide activity is probable is closely linked to appropriate characterization of the empirical rainfall intensity-duration thresholds. We test a variety of rainfall thresholds to evaluate algorithmic performance accuracy and determine the optimal set of conditions that

  15. [Clinical research XXII. From clinical judgment to Cox proportional hazards model].

    Science.gov (United States)

    Pérez-Rodríguez, Marcela; Rivas-Ruiz, Rodolfo; Palacios-Cruz, Lino; Talavera, Juan O

    2014-01-01

    Survival analyses are commonly used to determine the time to an event (for example, death). However, they can also be used for other clinical outcomes on the condition that these are dichotomous, for example healing time. These analyses consider the relationship with only one variable. The Cox proportional hazards model, however, is a multivariate extension of survival analysis in which other covariates that potentially confound the effect of the main maneuver studied, such as age, gender or disease stage, are taken into account. This analysis can include both quantitative and qualitative variables in the model. The measure of association used is called the hazard ratio (HR) or relative risk ratio, which is not the same as the relative risk or odds ratio (OR). The difference is that the HR refers to the possibility that one of the groups develops the event earlier than the group it is compared with. The Cox proportional hazards multivariate model is the most widely used in medicine when the phenomenon is studied in two dimensions: time and event.
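
    A minimal, hypothetical example of fitting a Cox proportional hazards model and reading off hazard ratios, using the Python lifelines package (the column names and values are made-up placeholders): in the printed summary, exp(coef) is the HR for a one-unit increase in the covariate, holding the others fixed.

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "time":  [5, 8, 12, 3, 9, 15, 7, 11],    # follow-up time (e.g. months)
        "event": [1, 0, 1, 1, 0, 1, 1, 0],        # 1 = event observed, 0 = censored
        "age":   [60, 52, 71, 68, 45, 59, 66, 50],
        "stage": [2, 1, 3, 3, 1, 2, 3, 1],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()   # exp(coef) column gives the hazard ratio per covariate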

  16. Current Methods of Natural Hazards Communication used within Catastrophe Modelling

    Science.gov (United States)

    Dawber, C.; Latchman, S.

    2012-04-01

    In the field of catastrophe modelling, natural hazards need to be explained every day to (re)insurance professionals so that they may understand estimates of the loss potential of their portfolio. The effective communication of natural hazards to city professionals requires different strategies depending on the audience, their prior knowledge and respective backgrounds. It is best to have at least three sets of tools in your arsenal for a specific topic: 1) an illustration/animation, 2) a mathematical formula and 3) a real-world case study example. This multi-faceted approach will be effective for those that learn best by pictorial means, mathematical means or anecdotal means. To show this we will use a set of real examples employed in the insurance industry of how different aspects of natural hazards and the uncertainty around them are explained to city professionals, for example, explaining the different modules within a catastrophe model such as the hazard, vulnerability and loss modules. We highlight how recent technology such as 3D plots, video recording and Google Earth maps, when used properly, can help explain concepts quickly and easily. Finally we also examine the pitfalls of using overly-complicated visualisations and in general how counter-intuitive deductions may be made.

  17. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    Science.gov (United States)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard with temporal change in Taiwan, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level for the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published model and thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
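
    A minimal sketch of the BPT ingredient described above, assuming hypothetical parameter values (not TEM estimates): the BPT distribution is the inverse Gaussian with mean recurrence interval mu and aperiodicity alpha, and the conditional probability of rupture within a forecast window follows from its CDF.

    import math

    def bpt_cdf(t, mu, alpha):
        """CDF of the BPT (inverse Gaussian) distribution with mean recurrence mu
        (years) and aperiodicity alpha; shape parameter lam = mu / alpha**2."""
        if t <= 0:
            return 0.0
        lam = mu / alpha ** 2
        phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
        a = math.sqrt(lam / t)
        return phi(a * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-a * (t / mu + 1.0))

    def conditional_probability(elapsed, window, mu, alpha):
        """P(rupture within `window` years | no rupture during the `elapsed` years)."""
        f0 = bpt_cdf(elapsed, mu, alpha)
        f1 = bpt_cdf(elapsed + window, mu, alpha)
        return (f1 - f0) / (1.0 - f0)

    # e.g. a fault with 200-yr mean recurrence, aperiodicity 0.5, 150 yr elapsed:
    # print(conditional_probability(150.0, 50.0, 200.0, 0.5))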

  18. Rockfall hazard analysis using LiDAR and spatial modeling

    Science.gov (United States)

    Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho

    2010-05-01

    Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.

  19. Random weighting method for Cox’s proportional hazards model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Variance of parameter estimate in Cox’s proportional hazards model is based on asymptotic variance. When sample size is small, variance can be estimated by bootstrap method. However, if censoring rate in a survival data set is high, bootstrap method may fail to work properly. This is because bootstrap samples may be even more heavily censored due to repeated sampling of the censored observations. This paper proposes a random weighting method for variance estimation and confidence interval estimation for proportional hazards model. This method, unlike the bootstrap method, does not lead to more severe censoring than the original sample does. Its large sample properties are studied and the consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as bootstrap method is and can produce good variance estimates or confidence intervals.

  20. Random weighting method for Cox's proportional hazards model

    Institute of Scientific and Technical Information of China (English)

    CUI WenQuan; LI Kai; YANG YaNing; WU YueHua

    2008-01-01

    Variance of parameter estimate in Cox's proportional hazards model is based on asymptotic variance. When sample size is small, variance can be estimated by bootstrap method. However, if censoring rate in a survival data set is high, bootstrap method may fail to work properly. This is because bootstrap samples may be even more heavily censored due to repeated sampling of the censored observations. This paper proposes a random weighting method for variance estimation and confidence interval estimation for proportional hazards model. This method, unlike the bootstrap method, does not lead to more severe censoring than the original sample does. Its large sample properties are studied and the consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as bootstrap method is and can produce good variance estimates or confidence intervals.

  1. Defaultable Game Options in a Hazard Process Model

    Directory of Open Access Journals (Sweden)

    Tomasz R. Bielecki

    2009-01-01

    Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection of arbitrage prices with a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.

  2. High-dimensional additive hazard models and the Lasso

    CERN Document Server

    Gaïffas, Stéphane

    2011-01-01

    We consider a general high-dimensional additive hazard model in a non-asymptotic setting, including regression for censored data. In this context, we consider a Lasso estimator with a fully data-driven $\\ell_1$ penalization, which is tuned for the estimation problem at hand. We prove sharp oracle inequalities for this estimator. Our analysis involves a new "data-driven" Bernstein inequality, which is of independent interest, in which the predictable variation is replaced by the optional variation.

  3. Recent Experiences in Aftershock Hazard Modelling in New Zealand

    Science.gov (United States)

    Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.

    2014-12-01

    The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence of events, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. This has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock

  4. Integrating Community Volcanic Hazard Mapping, Geographic Information Systems, and Modeling to Reduce Volcanic Hazard Vulnerability

    Science.gov (United States)

    Bajo Sanchez, Jorge V.

    This dissertation is composed of an introductory chapter and three papers about vulnerability and volcanic hazard maps with emphasis on lahars. The introductory chapter reviews definitions of the term vulnerability by the social and natural hazard community and it provides a new definition of hazard vulnerability that includes social and natural hazard factors. The first paper explains how the Community Volcanic Hazard Map (CVHM) is used for vulnerability analysis and explains in detail a new methodology to obtain valuable information about ethnophysiographic differences, hazards, and landscape knowledge of communities in the area of interest: the Canton Buenos Aires situated on the northern flank of the Santa Ana (Ilamatepec) Volcano, El Salvador. The second paper is about creating a lahar hazard map in data poor environments by generating a landslide inventory and obtaining potential volumes of dry material that can potentially be carried by lahars. The third paper introduces an innovative lahar hazard map integrating information generated by the previous two papers. It shows the differences in hazard maps created by the communities and experts both visually as well as quantitatively. This new, integrated hazard map was presented to the community with positive feedback and acceptance. The dissertation concludes with a summary chapter on the results and recommendations.

  5. Mathematical-statistical models of generated hazardous hospital solid waste.

    Science.gov (United States)

    Awad, A R; Obeidat, M; Al-Shareef, M

    2004-01-01

    This research work was carried out under the assumption that wastes generated from hospitals in Irbid, Jordan were hazardous. The hazardous and non-hazardous wastes generated from the different divisions in the three hospitals under consideration were not separated during the collection process. Three hospitals, Princess Basma hospital (public), Princess Bade'ah hospital (teaching), and Ibn Al-Nafis hospital (private) in Irbid were selected for this study. The research work took into account the amounts of solid waste accumulated from each division and also determined the total amount generated from each hospital. The generation rates (kilogram per patient per day; kilogram per bed per day) were determined for the three hospitals and compared with those of similar hospitals in Europe. The evaluation suggested that the current management of these wastes in the three studied hospitals needs revision, as these hospitals do not follow the methods of waste disposal, practiced in developed countries, that would reduce risk to human health and the environment. Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching, private). In these models, the number of patients, the number of beds, and the type of hospital were revealed to be significant factors affecting the quantity of waste generated. Multiple regressions were also used to estimate the quantities of wastes generated from similar divisions in the three hospitals (surgery, internal diseases, and maternity).

  6. A simulation study of finite-sample properties of marginal structural Cox proportional hazards models.

    Science.gov (United States)

    Westreich, Daniel; Cole, Stephen R; Schisterman, Enrique F; Platt, Robert W

    2012-08-30

    Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
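
    For readers unfamiliar with the weighting step, the following is a minimal, hypothetical sketch of stabilized inverse probability-of-treatment (IPT) weights at a single time point, using a logistic model for treatment given one confounder; the paper's simulation applies the same idea to time-varying treatment and confounding, which is not reproduced here, and the variable names are assumptions.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    L = rng.normal(size=n)                              # confounder
    A = rng.binomial(1, 1 / (1 + np.exp(-L)))           # treatment depends on L
    df = pd.DataFrame({"L": L, "A": A})

    # denominator: P(A = a | L) from a logistic regression
    denom_model = LogisticRegression().fit(df[["L"]], df["A"])
    p_denom = denom_model.predict_proba(df[["L"]])[np.arange(n), df["A"].to_numpy()]

    # numerator: marginal P(A = a), which stabilizes the weights
    p_a1 = df["A"].mean()
    p_num = np.where(df["A"] == 1, p_a1, 1 - p_a1)

    df["sw"] = p_num / p_denom    # stabilized weights; their mean should be close to 1
    print(df["sw"].describe())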

  7. Integrated Modeling for Flood Hazard Mapping Using Watershed Modeling System

    National Research Council Canada - National Science Library

    Seyedeh S. Sadrolashrafi; Thamer A. Mohamed; Ahmad R.B. Mahmud; Majid K. Kholghi; Amir Samadi

    2008-01-01

    ...) with the Watershed Modeling System (WMS) for flood modeling is developed. It also interconnects the terrain models and the GIS software, with commercial standard hydrological and hydraulic models, including HEC-1, HEC-RAS, etc...

  8. Uncertainties in modeling hazardous gas releases for emergency response

    Directory of Open Access Journals (Sweden)

    Kathrin Baumann-Stanzer

    2011-02-01

    Full Text Available In case of an accidental release of toxic gases the emergency responders need fast information about the affected area and the maximum impact. Hazard distances calculated with the models MET, ALOHA, BREEZE, TRACE and SAMS for scenarios with chlorine, ammonia and butane releases are compared in this study. The variations of the model results are measures for uncertainties in source estimation and dispersion calculation. Model runs for different wind speeds, atmospheric stabilities and roughness lengths indicate the model sensitivity to these input parameters. In-situ measurements at two urban near-traffic sites are compared to results of the Integrated Nowcasting through Comprehensive Analysis (INCA) in order to quantify uncertainties in the meteorological input. The hazard zone estimates from the models vary by up to a factor of 4 due to different input requirements as well as different internal model assumptions. None of the models is found to be 'more conservative' than the others in all scenarios. INCA wind speeds are correlated with in-situ observations at two urban sites in Vienna with a correlation coefficient of 0.89. The standard deviations of the normal error distribution are 0.8 m s-1 in wind speed, on the order of 50 degrees in wind direction, up to 4°C in air temperature and up to 10% in relative humidity. The observed air temperature and humidity are well reproduced by INCA with correlation coefficients of 0.96 to 0.99. INCA is therefore found to give a good representation of the local meteorological conditions. Besides real-time data, the INCA short-range forecast for the following hours may support the action planning of the first responders.

  9. Uncertainty in natural hazards, modeling and decision support: An introduction to this volume [Chapter 1

    Science.gov (United States)

    Karin Riley; Matthew Thompson; Peter Webley; Kevin D. Hyde

    2017-01-01

    Modeling has been used to characterize and map natural hazards and hazard susceptibility for decades. Uncertainties are pervasive in natural hazards analysis, including a limited ability to predict where and when extreme events will occur, with what consequences, and driven by what contributing factors. Modeling efforts are challenged by the intrinsic...

  10. Aspect ratio plays a role in the hazard potential of CeO2 nanoparticles in mouse lung and zebrafish gastrointestinal tract.

    Science.gov (United States)

    Lin, Sijie; Wang, Xiang; Ji, Zhaoxia; Chang, Chong Hyun; Dong, Yuan; Meng, Huan; Liao, Yu-Pei; Wang, Meiying; Song, Tze-Bin; Kohan, Sirus; Xia, Tian; Zink, Jeffrey I; Lin, Shuo; Nel, André E

    2014-05-27

    We have previously demonstrated that there is a relationship between the aspect ratio (AR) of CeO2 nanoparticles and in vitro hazard potential. CeO2 nanorods with AR ≥ 22 induced lysosomal damage and progressive effects on IL-1β production and cytotoxicity in the human myeloid cell line, THP-1. In order to determine whether this toxicological paradigm for long aspect ratio (LAR) CeO2 is also relevant in vivo, we performed comparative studies in the mouse lung and gastrointestinal tract (GIT) of zebrafish larvae. Although oropharyngeal aspiration could induce acute lung inflammation for CeO2 nanospheres and nanorods, only the nanorods with the highest AR (C5) induced significant IL-1β and TGF-β1 production in the bronchoalveolar lavage fluid at 21 days but did not induce pulmonary fibrosis. However, after a longer duration (44 days) exposure to 4 mg/kg of the C5 nanorods, more collagen production was seen with CeO2 nanorods vs nanospheres after correcting for Ce lung burden. Using an oral-exposure model in zebrafish larvae, we demonstrated that C5 nanorods also induced significant growth inhibition, a decrease in body weight, and delayed vertebral calcification. In contrast, CeO2 nanospheres and shorter nanorods had no effect. Histological and transmission electron microscopy analyses showed that the key injury mechanism of C5 was in the epithelial lining of the GIT, which demonstrated blunted microvilli and compromised digestive function. All considered, these data demonstrate that, similar to cellular studies, LAR CeO2 nanorods exhibit more toxicity in the lung and GIT, which could be relevant to inhalation and environmental hazard potential.

  11. Jackknifed random weighting for Cox proportional hazards model

    Institute of Scientific and Technical Information of China (English)

    LI Xiao; WU YaoHua; TU DongSheng

    2012-01-01

    The Cox proportional hazards model is the most used statistical model in the analysis of survival time data. Recently, a random weighting method was proposed to approximate the distribution of the maximum partial likelihood estimate for the regression coefficient in the Cox model. This method was shown to be less sensitive to heavy censoring than the bootstrap method in simulation studies, but it may not be second-order accurate, as was shown for the bootstrap approximation. In this paper, we propose an alternative random weighting method based on one-step linear jackknife pseudo values and prove the second-order accuracy of the proposed method. Monte Carlo simulations are also performed to evaluate the proposed method for fixed sample sizes.

  12. Regularization for Cox's Proportional Hazards Model With NP-Dimensionality

    CERN Document Server

    Bradic, Jelena; Jiang, Jiancheng

    2010-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the need for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for {\\it non-polynomial} (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed and both LASSO and SCAD are discussed specifically. We unveil the question under which dimensionality and correlation restrictions an oracle estimator can be constructed and grasped. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. The large deviation result for martingales, bearing interest of its own, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically ...
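
    As a generic, hedged illustration of penalized Cox regression on a wide design matrix (ordinary Lasso-type shrinkage, not the folded-concave SCAD estimator analyzed in the paper), the sketch below uses the penalizer and l1_ratio options available in recent versions of the Python lifelines package on simulated data; lifelines uses a smooth approximation to the L1 penalty, so "selected" coefficients are only approximately non-zero.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    true_beta = np.zeros(p)
    true_beta[:3] = [0.8, -0.8, 0.5]                         # sparse true signal
    T = rng.exponential(scale=np.exp(-X @ true_beta))        # event times
    C = rng.exponential(scale=np.median(T) * 2, size=n)      # censoring times
    df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
    df["time"], df["event"] = np.minimum(T, C), (T <= C).astype(int)

    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)           # pure L1 penalty
    cph.fit(df, duration_col="time", event_col="event")
    selected = cph.params_[cph.params_.abs() > 0.05]         # approximately non-zero
    print(selected)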

  13. Conveying Lava Flow Hazards Through Interactive Computer Models

    Science.gov (United States)

    Thomas, D.; Edwards, H. K.; Harnish, E. P.

    2007-12-01

    As part of an Information Sciences senior class project, a software package implementing an interactive version of the FLOWGO model was developed for the Island of Hawaii. The software is intended for use in an ongoing public outreach and hazards awareness program that educates the public about lava flow hazards on the island. The design parameters for the model allow an unsophisticated user to initiate a lava flow anywhere on the island and allow it to flow down-slope to the shoreline while displaying a timer to show the rate of advance of the flow. The user is also able to modify a range of input parameters including eruption rate, the temperature of the lava at the vent, and the crystal fraction present in the lava at the source. The flow trajectories are computed using a 30 m digital elevation model for the island, and the rate of advance of the flow is estimated using the average slope angle and the computed viscosity of the lava as it cools in either a channel (high heat loss) or lava tube (low heat loss). Even though the FLOWGO model is not intended to, and cannot, accurately predict the rate of advance of a tube-fed or channel-fed flow, the relative rates of flow advance for steep or flat-lying terrain convey critically important hazard information to the public: communities located on the steeply sloping western flanks of Mauna Loa may have no more than a few hours to evacuate in the face of a threatened flow from Mauna Loa's southwest rift, whereas communities on the more gently sloping eastern flanks of Mauna Loa and Kilauea may have weeks to months to prepare for evacuation. Further, the model can also show the effects of loss of critical infrastructure, with consequent impacts on access into and out of communities, loss of electrical supply, and communications, as a result of lava flow emplacement. The interactive model has been well received in an outreach setting and typically generates greater involvement by the participants than has been the case with static maps.

  14. Hazard based models for freeway traffic incident duration.

    Science.gov (United States)

    Tavassoli Hojati, Ahmad; Ferreira, Luis; Washington, Simon; Charles, Phil

    2013-03-01

    Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis by developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident duration arising from crashes and hazards. A Weibull model with gamma heterogeneity was most suitable for modelling incident duration of stationary vehicles. Significant variables affecting incident duration include characteristics of the incidents (severity, type, towing requirements, etc.), and location, time of day, and traffic characteristics of the incident. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is that the durations of each type of incident are uniquely different and respond to different factors. The results of this study are useful for traffic incident management agencies to implement strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
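
    A minimal, hypothetical example of the AFT approach described above, sketched with the lifelines WeibullAFTFitter (the covariates, durations and censoring indicator are made up, and the random-parameter and gamma-heterogeneity extensions used in the paper are not reproduced): in the summary, exp(coef) > 1 indicates a factor that lengthens expected duration.

    import pandas as pd
    from lifelines import WeibullAFTFitter

    df = pd.DataFrame({
        "duration_min": [25, 40, 55, 18, 90, 33, 120, 47, 60, 22],  # incident duration
        "cleared":      [1, 1, 1, 1, 0, 1, 1, 1, 1, 1],             # 0 = still ongoing (censored)
        "towing":       [0, 1, 1, 0, 1, 0, 1, 0, 1, 0],
        "peak_hour":    [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    })

    aft = WeibullAFTFitter()
    aft.fit(df, duration_col="duration_min", event_col="cleared")
    aft.print_summary()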

  15. VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation

    Science.gov (United States)

    Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.

    2009-12-01

    Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the

  16. Advancements in the global modelling of coastal flood hazard

    Science.gov (United States)

    Muis, Sanne; Verlaan, Martin; Nicholls, Robert J.; Brown, Sally; Hinkel, Jochen; Lincke, Daniel; Vafeidis, Athanasios T.; Scussolini, Paolo; Winsemius, Hessel C.; Ward, Philip J.

    2017-04-01

    Storm surges and high tides can cause catastrophic floods. Due to climate change and socio-economic development the potential impacts of coastal floods are increasing globally. Global modelling of coastal flood hazard provides an important perspective to quantify and effectively manage this challenge. In this contribution we show two recent advancements in global modelling of coastal flood hazard: 1) a new improved global dataset of extreme sea levels, and 2) an improved vertical datum for extreme sea levels. Both developments have important implications for estimates of exposure and inundation modelling. For over a decade, the only global dataset of extreme sea levels was the DINAS-COAST Extreme Sea Levels (DCESL), which uses a static approximation to estimate total water levels for different return periods. Recent advances have enabled the development of a new dynamically derived dataset: the Global Tide and Surge Reanalysis (GTSR) dataset. Here we present a comparison of the DCESL and GTSR extreme sea levels and the resulting global flood exposure for present-day conditions. While DCESL generally overestimates extremes, GTSR underestimates extremes, particularly in the tropics. This results in differences in estimates of flood exposure. When using the 1 in 100-year GTSR extremes, the exposed global population is 28% lower than when using the 1 in 100-year DCESL extremes. Previous studies at continental to global-scales have not accounted for the fact that GTSR and DCESL are referenced to mean sea level, whereas global elevation datasets, such as SRTM, are referenced to the EGM96 geoid. We propose a methodology to correct for the difference in vertical datum and demonstrate that this also has a large effect on exposure. For GTSR, the vertical datum correction results in a 60% increase in global exposure.

  17. Assessment and indirect adjustment for confounding by smoking in cohort studies using relative hazards models.

    Science.gov (United States)

    Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R

    2014-11-01

    Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.

  18. Preliminary deformation model for National Seismic Hazard map of Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z. [Geodesy Research Division, Faculty of Earth Science and Technology, Institute of Technology Bandung (Indonesia); Susilo,; Efendi, Joni [Agency for Geospatial Information (BIG) (Indonesia)

    2015-04-24

    A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed as a combination of block rotation and strain accumulation functions in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creeping in a shallower part. This research shows that rigid body motion dominates the deformation pattern, with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults where significant deformation reaches 25 mm/year.

  19. Optimization of maintenance policy using the proportional hazard model

    Energy Technology Data Exchange (ETDEWEB)

    Samrout, M. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: mohamad.el_samrout@utt.fr; Chatelet, E. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: chatelt@utt.fr; Kouta, R. [M3M Laboratory, University of Technology of Belfort Montbeliard (France); Chebbo, N. [Industrial Systems Laboratory, IUT, Lebanese University (Lebanon)

    2009-01-15

    The evolution of system reliability depends on its structure as well as on the evolution of its components' reliability. The latter is a function of component age during a system's operating life. Component aging is strongly affected by maintenance activities performed on the system. In this work, we consider two categories of maintenance activities: corrective maintenance (CM) and preventive maintenance (PM). Maintenance actions are characterized by their ability to reduce this age. PM consists of actions applied on components while they are operating, whereas CM actions occur when the component breaks down. In this paper, we expound a new method to integrate the effect of CM while planning for the PM policy. The proportional hazard function was used as a modeling tool for that purpose. Interesting results were obtained when policies that take the CM effect into consideration were compared with those that do not.

  20. POSSIBILISTIC SHARPE RATIO BASED NOVICE PORTFOLIO SELECTION MODELS

    Directory of Open Access Journals (Sweden)

    Rupak Bhattacharyya

    2013-02-01

    Full Text Available This paper uses the concept of possibilistic risk aversion to propose a new approach for portfolio selection in a fuzzy environment. Using possibility theory, the possibilistic mean, variance, standard deviation and risk premium of a fuzzy number are established. The possibilistic Sharpe ratio is defined as the ratio of the possibilistic risk premium to the possibilistic standard deviation of a portfolio. The Sharpe ratio is a measure of the performance of the portfolio relative to the risk taken: the higher the Sharpe ratio, the better the performance of the portfolio and the greater the reward for taking risk. New fuzzy portfolio selection models based on the possibilistic Sharpe ratio, return and skewness of the portfolio are proposed. The feasibility and effectiveness of the proposed method are illustrated by a numerical example extracted from the Bombay Stock Exchange (BSE), India, and solved by a multiple objective genetic algorithm (MOGA).

  1. Hydraulic modeling for lahar hazards at cascades volcanoes

    Science.gov (United States)

    Costa, J.E.

    1997-01-01

    The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times-of-travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, including the 230 million m3 Electron lahar from Mt. Rainier, as well as a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.

  2. Opinion: the use of natural hazard modeling for decision making under uncertainty

    Institute of Scientific and Technical Information of China (English)

    David E Calkin; Mike Mentis

    2015-01-01

    Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex environmental models. However, to our knowledge there has been less focus on the conditions where decision makers can confidently rely on results from these models. In this review we propose a preliminary set of conditions necessary for the appropriate application of modeled results to natural hazard decision making and provide relevant examples within US wildfire management programs.

  3. A modeling framework for investment planning in interdependent infrastructures in multi-hazard environments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael

    2013-09-01

    Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.

  4. Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank

    2008-11-01

    Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate significant adverse effects of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) Benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk. (2) Directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited. (3) Identifying the key processes and chemical and/or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile. (4) Predicting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model, the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application.

  5. Regional Integrated Meteorological Forecasting and Warning Model for Geological Hazards Based on Logistic Regression

    Institute of Scientific and Technical Information of China (English)

    XU Jing; YANG Chi; ZHANG Guoping

    2007-01-01

    An information model is adopted to integrate various geoscience factors to estimate the susceptibility of geological hazards. Further combining dynamic rainfall observations, logistic regression is used to model the probabilities of geological hazard occurrences, upon which hierarchical warnings for rainfall-induced geological hazards are produced. The forecasting and warning model takes numerical precipitation forecasts on grid points as its dynamic input, forecasts the probabilities of geological hazard occurrences on the same grid, and translates the results into likelihoods in the form of a 5-level hierarchy. Validation of the model with observational data for the year 2004 shows that 80% of the geological hazards of that year were identified as "likely enough to release warning messages". The model can satisfy the requirements of an operational warning system and is thus an effective way to improve meteorological warnings for geological hazards.
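
    The warning chain described above can be sketched, purely for illustration, as a logistic regression linking susceptibility and forecast rainfall to an occurrence probability that is then binned into a 5-level warning hierarchy; the features, coefficients and thresholds below are invented, not those of the operational model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    susceptibility = rng.uniform(0, 1, n)            # integrated geoscience factor
    rain_24h = rng.gamma(2.0, 20.0, n)               # forecast 24-h rainfall (mm)
    logit = -6.0 + 4.0 * susceptibility + 0.05 * rain_24h
    occurred = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic hazard records

    X = np.column_stack([susceptibility, rain_24h])
    model = LogisticRegression().fit(X, occurred)
    prob = model.predict_proba(X)[:, 1]

    # translate probabilities into a 5-level warning hierarchy (thresholds illustrative)
    levels = np.digitize(prob, [0.05, 0.15, 0.35, 0.60]) + 1    # 1 (low) .. 5 (high)
    print(np.bincount(levels)[1:])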

  6. 3D Property Modeling of Void Ratio by Cokriging

    Institute of Scientific and Technical Information of China (English)

    Yao Lingqing; Pan Mao; Cheng Qiuming

    2008-01-01

    Void ratio measures the compactness of ground soil in geotechnical engineering. When samples are collected in a certain area for mapping void ratios, other relevant properties such as water content may also be analyzed. To map the spatial distribution of void ratio in the area based on these point observations, interpolation is often needed. Owing to the variation of sampling density along the horizontal and vertical directions, special consideration is required to handle the anisotropy of the estimator. 3D property modeling aims at predicting the overall distribution of property values from limited samples, and geostatistical methods can be employed naturally here because they help to minimize the mean square error of estimation. To construct the 3D property model of void ratio, cokriging was used, considering its mutual correlation with water content, which is another important soil parameter. Moreover, a k-d tree was adopted to organize the samples to accelerate neighbor queries in 3D space during the above modeling process. Finally, the spatial configuration of the void ratio distribution in an engineering body was modeled through 3D visualization, which provides important information for civil engineering purposes.
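
    The neighbor-search step mentioned above can be illustrated with scipy's cKDTree, as in the hypothetical sketch below; a simple inverse-distance estimate stands in for the cokriging step, the sample coordinates are invented, and scaling the vertical axis is only one crude way to reflect anisotropy.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    samples = rng.uniform([0, 0, -30], [500, 500, 0], size=(1000, 3))   # x, y, z (m)
    void_ratio = rng.uniform(0.4, 1.2, size=1000)

    # anisotropy: stretch the vertical axis so "near" means near in the scaled space
    aniso = np.array([1.0, 1.0, 10.0])
    tree = cKDTree(samples * aniso)

    query = np.array([[250.0, 250.0, -10.0]]) * aniso
    dist, idx = tree.query(query, k=8)             # 8 nearest samples for the estimator
    weights = 1.0 / np.maximum(dist, 1e-6) ** 2    # simple inverse-distance weights
    estimate = np.sum(weights * void_ratio[idx]) / np.sum(weights)
    print(estimate)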

  7. Research collaboration, hazard modeling and dissemination in volcanology with Vhub

    Science.gov (United States)

    Palma Lizana, J. L.; Valentine, G. A.

    2011-12-01

    Vhub (online at vhub.org) is a cyberinfrastructure for collaboration in volcanology research, education, and outreach. One of the core objectives of this project is to accelerate the transfer of research tools to organizations and stakeholders charged with volcano hazard and risk mitigation (such as observatories). Vhub offers a clearinghouse for computational models of volcanic processes and data analysis, documentation of those models, and capabilities for online collaborative groups focused on issues such as code development, configuration management, benchmarking, and validation. A subset of simulations is already available for online execution, eliminating the need to download and compile locally. In addition, Vhub is a platform for sharing presentations and other educational material in a variety of media formats, which are useful in teaching university-level volcanology. VHub also has wikis, blogs and group functions around specific topics to encourage collaboration and discussion. In this presentation we provide examples of the vhub capabilities, including: (1) tephra dispersion and block-and-ash flow models; (2) shared educational materials; (3) online collaborative environment for different types of research, including field-based studies and plume dispersal modeling; (4) workshops. Future goals include implementation of middleware to allow access to data and databases that are stored and maintained at various institutions around the world. All of these capabilities can be exercised with a user-defined level of privacy, ranging from completely private (only shared and visible to specified people) to completely public. The volcanological community is encouraged to use the resources of vhub and also to contribute models, datasets, and other items that authors would like to disseminate. The project is funded by the US National Science Foundation and includes a core development team at University at Buffalo, Michigan Technological University, and University

  8. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder and for the subsequent evaluation of a leak path factor (LPF), i.e. the amount of respirable material that escapes the facility into the outside environment, implicit in the scenario. This LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways (both open and closed). This study is presented to show how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
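
    As a back-of-the-envelope illustration of how stage-wise leak path factors combine into a total LPF and a respirable source term, consider the sketch below; the stage values, material at risk and release fractions are assumed for illustration and are not MELCOR output.

    def total_lpf(stage_lpfs):
        """Material must pass every stage in series, so the total LPF is the product."""
        total = 1.0
        for lpf in stage_lpfs:
            total *= lpf
        return total

    mar = 1000.0          # material at risk (g of powder), assumed
    arf_rf = 1.0e-3       # airborne release fraction x respirable fraction, assumed
    stages = [0.5, 0.5]   # the assumed 0.5 x 0.5 multiplication questioned above

    lpf = total_lpf(stages)
    source_term = mar * arf_rf * lpf    # respirable grams escaping the facility
    print(lpf, source_term)             # 0.25 and 0.25 g under these assumptions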

  9. Stability of earthquake clustering models: Criticality and branching ratios

    Science.gov (United States)

    Zhuang, Jiancang; Werner, Maximilian J.; Harte, David S.

    2013-12-01

    We study the stability conditions of a class of branching processes prominent in the analysis and modeling of seismicity. This class includes the epidemic-type aftershock sequence (ETAS) model as a special case, but more generally comprises models in which the magnitude distribution of direct offspring depends on the magnitude of the progenitor, such as the branching aftershock sequence (BASS) model and another recently proposed branching model based on a dynamic scaling hypothesis. These stability conditions are closely related to the concepts of the criticality parameter and the branching ratio. The criticality parameter summarizes the asymptotic behavior of the population after sufficiently many generations, determined by the maximum eigenvalue of the transition equations. The branching ratio is defined by the proportion of triggered events in all the events. Based on the results for the generalized case, we show that the branching ratio of the ETAS model is identical to its criticality parameter because its magnitude density is separable from the full intensity. More generally, however, these two values differ and thus place separate conditions on model stability. As an illustration of the difference and of the importance of the stability conditions, we employ a version of the BASS model, reformulated to ensure the possibility of stationarity. In addition, we analyze the magnitude distributions of successive generations of the BASS model via analytical and numerical methods, and find that the compound density differs substantially from a Gutenberg-Richter distribution, unless the process is essentially subcritical (branching ratio less than 1) or the magnitude dependence between the parent event and the direct offspring is weak.

  11. Particle multiplicities and particle ratios in excluded volume model

    CERN Document Server

    Mishra, M

    2008-01-01

    One of the most surprising results is that a consistent description of all the experimental results on particle multiplicities and particle ratios obtained from the lowest AGS to the highest RHIC energies is possible within the framework of a thermal statistical model. We propose here a thermodynamically consistent excluded-volume model involving an interacting multi-component hadron gas. We find that the energy dependence of the total multiplicities of strange and non-strange hadrons obtained in this model agrees closely with the experimental results. It indicates that the freeze-out volume of the fireball is uniformly the same for all the particles. We have also compared the variation of particle ratios such as $K^{-}/K^{+}$, $\bar{p}/p$, $\bar{\Lambda}/\Lambda$, $\bar{\Xi}/\Xi$ and $\bar{\Omega}/\Omega$, among others, with respect to the center-of-mass energy, as predicted by our model, with the recent experimental data.

  12. On Model Specification and Selection of the Cox Proportional Hazards Model*

    OpenAIRE

    Lin, Chen-Yen; Halabi, Susan

    2013-01-01

    Prognosis plays a pivotal role in patient management and trial design. A useful prognostic model should correctly identify important risk factors and estimate their effects. In this article, we discuss several challenges in selecting prognostic factors and estimating their effects using the Cox proportional hazards model. Although it has a flexible semiparametric form, the Cox model is not entirely exempt from model misspecification. To minimize possible misspecification, instead of imposing tradi...

  13. Simulation Modeling and Analysis of Operator-Machine Ratio

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on a simulation model of a semiconductor manufacturer, an operator-machine ratio (OMR) analysis is carried out using work study and time study. Through sensitivity analysis, it is found that labor utilization decreases as lot size increases. The analysis also identifies that the OMR for this company should be improved from 1:3 to 1:5. An application result shows that the proposed model can effectively improve the OMR by 33%.

  14. Hidden Markov models for estimating animal mortality from anthropogenic hazards

    Science.gov (United States)

    Carcasses searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...

  15. Modelling Inland Flood Events for Hazard Maps in Taiwan

    Science.gov (United States)

    Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.

    2015-12-01

    Taiwan experiences significant inland flooding, driven by torrential rainfall from plum rain storms and typhoons during summer and fall. Over the last 13 to 16 years of data, about 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation, with mostly hilly/mountainous topography, is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA) - 2.5 times the world's average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of it falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have the steepest slopes, the shortest response times with rapid flows, and the largest peak flows as well as specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return-period maps at 1 arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcings. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage

  16. Experimental study on prediction model for maximum rebound ratio

    Institute of Scientific and Technical Information of China (English)

    LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong

    2007-01-01

    The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie exactly below the estimated possible maximum values, as expected, while the fourth available field-recorded PPV lies close to and slightly higher than the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.

  17. Conceptual geoinformation model of natural hazards risk assessment

    Science.gov (United States)

    Kulygin, Valerii

    2016-04-01

    Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts from different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of origin of natural hazards, on the other hand. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes). They are used to identify changes over time. The scenario approach is utilized to account for the spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.

  18. Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation

    Science.gov (United States)

    Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.

    2006-12-01

    An increasing number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency management. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site, numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have been validated themselves with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with both laboratory measurements and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes

  19. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    OpenAIRE

    J. Blahut; P. Horton; S. Sterlacchini; Jaboyedoff, M.

    2010-01-01

    Debris flow hazard modelling at medium (regional) scale has been subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of th...

  20. Modelling the costs of natural hazards in games

    Science.gov (United States)

    Bostenaru-Dan, M.

    2012-04-01

    City are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city in the next 20 years. The connection to another games genre as video games, the board games, will be investigated, since there are games on construction and reconstruction of a cathedral and its tower and a bridge in an urban environment of the middle ages based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End" and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first hand playing experience. In games like "World without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done through providing "building" an item out of stylised materials, such as "stone", "sand" or more specific ones as "nail". Such approach could be used also for retrofitting buildings for earthquakes, in the series of "upgrade", not just for extension as it is currently in games, and this is what our research is about. "World without End" includes a natural disaster not so analysed today but which was judged by the author as the worst of manhood: the Black Death. The Black Death has effects and costs as well, not only modelled through action cards, but also on the built environment, by buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has been recently recognised as a way to bring games theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to Analytic Hierarchy Process. The presentation aims to also give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing. Games are also employed in teaching of urban

  1. Expert elicitation for a national-level volcano hazard model

    Science.gov (United States)

    Bebbington, Mark; Stirling, Mark; Cronin, Shane; Wang, Ting; Jolly, Gill

    2016-04-01

    The quantification of volcanic hazard at national level is a vital pre-requisite to placing volcanic risk on a platform that permits meaningful comparison with other hazards such as earthquakes. New Zealand has up to a dozen dangerous volcanoes, with the usual mixed degrees of knowledge concerning their temporal and spatial eruptive history. Information on the 'size' of the eruptions, be it in terms of VEI, volume or duration, is sketchy at best. These limitations and the need for a uniform approach lend themselves to a subjective hazard analysis via expert elicitation. Approximately 20 New Zealand volcanologists provided estimates for the size of the next eruption from each volcano and, conditional on this, its location, timing and duration. Opinions were likewise elicited from a control group of statisticians, seismologists and (geo)chemists, all of whom had at least heard the term 'volcano'. The opinions were combined via the Cooke classical method. We will report on the preliminary results from the exercise.

  2. The influence of mapped hazards on risk beliefs: a proximity-based modeling approach.

    Science.gov (United States)

    Severtson, Dolores J; Burt, James E

    2012-02-01

    Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated "you live here" location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, for example, distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples.

  3. DEVELOPMENT OF A CRASH RISK PROBABILITY MODEL FOR FREEWAYS BASED ON HAZARD PREDICTION INDEX

    Directory of Open Access Journals (Sweden)

    Md. Mahmud Hasan

    2014-12-01

    This study presents a method for the identification of hazardous situations on freeways. The hazard identification is done using a crash risk probability model. For this study, an approximately 18 km long section of the Eastern Freeway in Melbourne (Australia) is selected as a test bed. Two categories of data, i.e. traffic and accident record data, are used for the analysis and modelling. In developing the crash risk probability model, a Hazard Prediction Index is formulated in this study from the differences between traffic parameters and threshold values. Seven different prediction indices are examined and the best one is selected as the crash risk probability model based on prediction error minimisation.
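
    The study's exact index is not reproduced in this record, but the general construction, a score built from differences between observed traffic parameters and threshold values, can be sketched as follows (all parameter names, thresholds and weights are hypothetical).

```python
# Hypothetical sketch of a hazard prediction index built from threshold
# exceedances of traffic parameters; not the index used in the study.

def hazard_index(observed, thresholds, weights):
    """Weighted sum of relative threshold exceedances, one term per parameter."""
    index = 0.0
    for name, value in observed.items():
        exceedance = max(0.0, value - thresholds[name]) / thresholds[name]
        index += weights[name] * exceedance
    return index

observed   = {"occupancy_pct": 32.0, "speed_drop_kmh": 28.0, "flow_veh_5min": 410.0}
thresholds = {"occupancy_pct": 25.0, "speed_drop_kmh": 20.0, "flow_veh_5min": 380.0}
weights    = {"occupancy_pct": 0.5, "speed_drop_kmh": 0.3, "flow_veh_5min": 0.2}

index = hazard_index(observed, thresholds, weights)
print(f"hazard index = {index:.2f} -> {'hazardous' if index > 0.25 else 'normal'}")
```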

  4. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    Science.gov (United States)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on the economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  5. Delayed geochemical hazard: Concept, digital model and case study

    Institute of Scientific and Technical Information of China (English)

    CHEN Ming; FENG Liu; Jacques Yvon

    2005-01-01

    Delayed Geochemical Hazard (DGH for short) denotes the whole process of a kind of serious ecological and environmental hazard caused by sudden reactivation and sharp release of a long-term accumulated pollutant from stable species to active ones in a soil or sediment system, due to changes in physical-chemical conditions (such as temperature, pH, Eh, moisture, the concentrations of organic matter, etc.) or a decrease in environmental capacity. The characteristics of DGH are discussed. The process of a typical DGH can be expressed as a nonlinear polynomial. The points where the derivative functions of the first and second orders of the polynomial reach zero, minimum and maximum are keys for risk assessment and hazard prediction. The process and mechanism of the hazard are principally due to the transformation of the pollutant among different species. The concepts of "total releasable content of pollutant" (TRCP) and "total concentration of active species" (TCAS) are defined to describe the mechanism of DGH. The possibility of temporal and spatial propagation is discussed. A case study shows that there exists a transformation mechanism of "gradual release" and "chain reaction" among the exchangeable species and those bound to carbonate, iron and manganese oxides, and organic matter, thus causing the delayed geochemical hazard.

  6. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  8. An estimating equation for parametric shared frailty models with marginal additive hazards

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Martinussen, Torben

    2004-01-01

    Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable to use the marginal additive hazards model. We derive asymptotic properties of the Lin and Ying estimators for the marginal additive hazards model for multivariate failure time data. Furthermore, we suggest estimating equations for the regression parameters and association parameters in parametric shared frailty models.
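
    For reference, the marginal additive hazards model referred to here specifies a hazard that is additive in the covariates, and the Lin and Ying estimator solves an unbiased estimating equation of (up to notational conventions) the following standard form; this is a textbook statement rather than the authors' clustered-data equations.

```latex
% Marginal additive hazards model and the Lin-Ying estimating function
% (standard notation; not the paper's shared-frailty estimating equations).
\lambda_i(t \mid Z_i) = \lambda_0(t) + \beta^\top Z_i(t), \qquad
U(\beta) = \sum_{i=1}^{n} \int_0^{\tau}
  \bigl\{ Z_i(t) - \bar{Z}(t) \bigr\}
  \bigl\{ \mathrm{d}N_i(t) - Y_i(t)\, \beta^\top Z_i(t)\, \mathrm{d}t \bigr\},
\qquad
\bar{Z}(t) = \frac{\sum_{j} Y_j(t) Z_j(t)}{\sum_{j} Y_j(t)} .
```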

  9. Fluctuation dissipation ratio in the one dimensional kinetic Ising model

    OpenAIRE

    Lippiello, E.; Zannetti, M.

    2000-01-01

    The exact relation between the response function $R(t,t^{\\prime})$ and the two time correlation function $C(t,t^{\\prime})$ is derived analytically in the one dimensional kinetic Ising model subjected to a temperature quench. The fluctuation dissipation ratio $X(t,t^{\\prime})$ is found to depend on time through $C(t,t^{\\prime})$ in the time region where scaling $C(t,t^{\\prime}) = f(t/t^{\\prime})$ holds. The crossover from the nontrivial form $X(C(t,t^{\\prime}))$ to $X(t,t^{\\prime}) \\equiv 1$ t...

  10. Opinion: the use of natural hazard modeling for decision making under uncertainty

    Directory of Open Access Journals (Sweden)

    David E Calkin

    2015-04-01

    Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and on creating structured evaluation criteria for complex environmental models. However, to our knowledge there has been less focus on the conditions under which decision makers can confidently rely on results from these models. In this review we propose a preliminary set of conditions necessary for the appropriate application of modeled results to natural hazard decision making and provide relevant examples within US wildfire management programs.

  11. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
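
    In the notation commonly used for this class of models, the conditional density is an exponential tilt of an unknown baseline; one standard way of writing it is shown below, where q is a known function carrying the covariate information and f0 is the unspecified baseline density (the paper's exact tilting function may differ).

```latex
% A common form of the semiparametric density ratio model: unknown baseline
% density f_0, known parametric tilt q, normalising constant c(\theta, x).
f(y \mid x) = f_0(y)\, \exp\bigl\{ \theta^\top q(x, y) - c(\theta, x) \bigr\},
\qquad
c(\theta, x) = \log \int f_0(y)\, \exp\bigl\{ \theta^\top q(x, y) \bigr\}\, \mathrm{d}y .
```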

  12. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    Directory of Open Access Journals (Sweden)

    J. Blahut

    2010-11-01

    Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as inputs to the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise

  13. Model for Estimation Urban Transportation Supply-Demand Ratio

    Directory of Open Access Journals (Sweden)

    Chaoqun Wu

    2015-01-01

    The paper establishes an estimation model of the urban transportation supply-demand ratio (TSDR) to quantitatively describe the conditions of an urban transport system and to support a theoretical basis for transport policy-making. This TSDR estimation model is supported by the system dynamics principle and VENSIM (an application that simulates the real system). It was accomplished by long-term observation of eight cities' transport conditions and by analyzing the estimated results of TSDR from fifteen sets of refined data. The estimated results indicate that an urban TSDR can be classified into four grades representing four transport conditions: "scarce supply," "short supply," "supply-demand balance," and "excess supply." These results imply that transport policies or measures can be quantified to facilitate the process of ordering and screening them.

  14. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  15. A study of the slope of Cox proportional hazard and Weibull models

    African Journals Online (AJOL)

    Adejumo & Ahmadu

    the values of the unknown parameters. These include the ... semi parametric cox proportional hazard model when the parametric ... simulated and the real life data approach. ... greatest risk progression of TB infection to active disease. People.

  16. Investigation of the Effect of Traffic Parameters on Road Hazard Using Classification Tree Model

    Directory of Open Access Journals (Sweden)

    Md. Mahmud Hasan

    2012-09-01

    This paper presents a method for the identification of hazardous situations on freeways. For this study, an approximately 18 km long section of the Eastern Freeway in Melbourne, Australia was selected as a test bed. Three categories of data, i.e. traffic, weather and accident record data, were used for the analysis and modelling. In developing the crash risk probability model, a classification tree based model was developed in this study. In formulating the models, it was found that weather conditions did not have a significant impact on accident occurrence, so the classification tree was built using two traffic indices only: traffic flow and vehicle speed. The formulated classification tree is able to identify possible hazard and non-hazard situations on the freeway. The outcome of the study will aid hazard mitigation strategies.
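
    A minimal sketch of a two-feature classification tree of the kind described here, using scikit-learn and synthetic flow/speed observations (the SIMS data and the study's fitted tree are not reproduced):

```python
# Minimal two-feature classification tree (traffic flow, vehicle speed) for
# hazard / non-hazard labelling; all data below are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
flow = rng.uniform(200.0, 2000.0, n)    # vehicles per hour per lane
speed = rng.uniform(20.0, 110.0, n)     # km/h

# Synthetic labelling rule: high flow combined with low speed is "hazard" (1).
label = ((flow > 1400.0) & (speed < 60.0)).astype(int)

X = np.column_stack([flow, speed])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, label)

print(tree.predict([[1600.0, 45.0], [600.0, 95.0]]))  # typically [1 0]
```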

  17. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Directory of Open Access Journals (Sweden)

    Mark Stirling

    2017-06-01

    We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration, and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g., short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  18. Deriving global flood hazard maps of fluvial floods through a physical model cascade

    OpenAIRE

    Pappenberger, F.; E. Dutra; Wetterhall, F.; Cloke, H

    2012-01-01

    Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteoro...

  19. Use of the p,p'-DDD: p,p'-DDE concentration ratio to trace contaminant migration from a hazardous waste site.

    Science.gov (United States)

    Pinkney, Alfred E; McGowan, Peter C

    2006-09-01

    For approximately 50 years, beginning in the 1920s, hazardous wastes were disposed in an 11-hectare area of the Marine Corps Base (MCB) Quantico, Virginia, USA known as the Old Landfill. Polychlorinated biphenyls (PCBs) and DDT compounds were the primary contaminants of concern. These contaminants migrated into the sediments of a 78-hectare area of the Potomac River, the Quantico Embayment. Fish tissue contamination resulted in the MCB posting signs along the embayment shoreline warning fishermen to avoid consumption. In this paper, we interpret total PCB (t-PCBs) and total DDT (t-DDT, sum of six DDT, DDD, and DDE isomers) data from monitoring studies. We use the ratio of p,p'-DDD to p,p'-DDE concentrations as a tracer to distinguish site-related from regional contamination. The median DDD/DDE ratio in Quantico Embayment sediments (3.5) was significantly higher than the median ratio (0.71) in sediments from nearby Powells Creek, used as a reference area. In general, t-PCBs and t-DDT concentrations were significantly higher in killifish (Fundulus diaphanus) and carp (Cyprinus carpio) from the Quantico Embayment compared with Powells Creek. For both species, Quantico Embayment fish had mean or median DDD/DDE ratios greater than one. Median ratios were significantly higher in Quantico Embayment (4.6) than Powells Creek (0.28) whole body carp. In contrast, t-PCBs and t-DDT in channel catfish (Ictalurus punctatus) fillets were similar in Quantico Embayment and Powells Creek collections, with median ratios of 0.34 and 0.26, respectively. Differences between species may be attributable to movement (carp and killifish being more localized) and feeding patterns (carp ingesting sediment while feeding). We recommend that environmental scientists use this ratio when investigating sites with DDT contamination.
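
    The tracer itself is simple arithmetic: for each sample the p,p'-DDD concentration is divided by the p,p'-DDE concentration, and the site median is compared with the reference median, e.g. (hypothetical concentrations):

```python
# Per-sample p,p'-DDD : p,p'-DDE ratios compared between a site and a
# reference area; all concentrations (ng/g) below are hypothetical.
from statistics import median

embayment = [(120.0, 30.0), (95.0, 28.0), (210.0, 60.0), (75.0, 25.0)]  # (DDD, DDE)
reference = [(12.0, 18.0), (9.0, 14.0), (20.0, 26.0)]

def ratios(pairs):
    return [ddd / dde for ddd, dde in pairs]

print(f"site median DDD/DDE      = {median(ratios(embayment)):.2f}")
print(f"reference median DDD/DDE = {median(ratios(reference)):.2f}")
# A site median well above the reference median points to site-related
# contamination rather than regional background.
```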

  20. Accelerated proportional degradation hazards-odds model in accelerated degradation test

    Institute of Scientific and Technical Information of China (English)

    Tingting Huang; Zhizhong Li

    2015-01-01

    An accelerated proportional degradation hazards-odds model is proposed. It is a non-parametric model and thus has path-free and distribution-free properties, avoiding the errors caused by faulty assumptions of degradation paths or distribution of degradation measurements. It is established based on a link function which combines the degradation cumulative hazard rate function and the degradation odds function through a transformation parameter, and this makes the accelerated proportional degradation hazards model and the accelerated proportional degradation odds model special cases of it. Hypothesis tests are discussed, and the proposed model is applicable when some model assumptions are satisfied. This model is utilized to estimate the reliability of miniature bulbs under low stress levels based on the degradation data obtained under high stress levels to validate the effectiveness of this model.

  1. Modeling of finite aspect ratio effects on current drive

    Energy Technology Data Exchange (ETDEWEB)

    Wright, J.C.; Phillips, C.K. [Princeton Plasma Physics Lab., NJ (United States)]

    1996-12-31

    Most 2D RF modeling codes use a parameterization of current drive efficiencies to calculate fast wave driven currents. This parameterization assumes a uniform diffusion coefficient and requires a priori knowledge of the wave polarizations. These difficulties may be avoided by a direct calculation of the quasilinear diffusion coefficient from the Kennel-Englemann form with the field polarizations calculated by a full wave code. This eliminates the need to use the approximation inherent in the parameterization. Current profiles are then calculated using the adjoint formulation. This approach has been implemented in the FISIC code. The accuracy of the parameterization of the current drive efficiency, η, is judged by a comparison with a direct calculation, where χ is the adjoint function, ε is the kinetic energy, and Γ is the quasilinear flux. It is shown that for large aspect ratio devices (ε → 0), the parameterization is nearly identical to the direct calculation. As the aspect ratio approaches unity, visible differences between the two calculations appear.

  2. A fast, calibrated model for pyroclastic density currents kinematics and hazard

    Science.gov (United States)

    Esposti Ongaro, Tomaso; Orsucci, Simone; Cornolti, Fulvio

    2016-11-01

    Multiphase flow models represent valuable tools for the study of the complex, non-equilibrium dynamics of pyroclastic density currents. Particle sedimentation, flow stratification and rheological changes, depending on the flow regime, interaction with topographic obstacles, turbulent air entrainment, buoyancy reversal, and other complex features of pyroclastic currents can be simulated in two and three dimensions, by exploiting efficient numerical solvers and the improved computational capability of modern supercomputers. However, numerical simulations of polydisperse gas-particle mixtures are quite computationally expensive, so that their use in hazard assessment studies (where there is the need to evaluate the probability of hazardous actions over hundreds of possible scenarios) is still challenging. To this aim, a simplified integral (box) model can be used, under the appropriate hypotheses, to describe the kinematics of pyroclastic density currents over a flat topography, their scaling properties and their depositional features. In this work, multiphase flow simulations are used to evaluate integral model approximations, to calibrate its free parameters and to assess the influence of the input data on the results. Two-dimensional numerical simulations describe the generation and decoupling of a dense, basal layer (formed by progressive particle sedimentation) from the dilute transport system. In the Boussinesq regime (i.e., for solid mass fractions below about 0.1), the current Froude number (i.e., the ratio between the current inertia and buoyancy) does not strongly depend on initial conditions and is consistent with that measured in laboratory experiments (i.e., between 1.05 and 1.2). For higher density ratios (solid mass fraction in the range 0.1-0.9) but still in a relatively dilute regime (particle volume fraction lower than 0.01), numerical simulations demonstrate that the box model is still applicable, but the Froude number depends on the reduced
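
    In its simplest constant-volume form, the box (integral) model referred to above advances the current front at a speed set by the Froude condition while conserving the volume per unit width; the sketch below uses hypothetical initial conditions and a Froude number in the calibrated range quoted in the abstract.

```python
# Minimal constant-volume box model for a dilute gravity current (per unit
# width): front speed u = Fr * sqrt(g' * h), thickness h = V / x.
# Densities, volume and initial front position are hypothetical.
import numpy as np

Fr = 1.1                     # Froude number (calibrated range ~1.05-1.2)
g = 9.81
rho_c, rho_a = 2.0, 1.2      # current and ambient densities, kg/m^3 (dilute)
g_prime = g * (rho_c - rho_a) / rho_a
V = 2.0e4                    # current volume per unit width, m^2

x, t, dt = 100.0, 0.0, 0.5   # front position (m), time (s), time step (s)
while t < 600.0:
    h = V / x                # thickness from volume conservation
    x += Fr * np.sqrt(g_prime * h) * dt
    t += dt

print(f"front position after {t:.0f} s: {x / 1000.0:.2f} km")
```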

  3. Application of a Cloud Model-Set Pair Analysis in Hazard Assessment for Biomass Gasification Stations

    Science.gov (United States)

    Yan, Fang; Xu, Kaili

    2017-01-01

    Because a biomass gasification station includes various hazard factors, hazard assessment is needed and significant. In this article, the cloud model (CM) is employed to improve set pair analysis (SPA), and a novel hazard assessment method for a biomass gasification station is proposed based on the cloud model-set pair analysis (CM-SPA). In this method, cloud weight is proposed to be the weight of index. In contrast to the index weight of other methods, cloud weight is shown by cloud descriptors; hence, the randomness and fuzziness of cloud weight will make it effective to reflect the linguistic variables of experts. Then, the cloud connection degree (CCD) is proposed to replace the connection degree (CD); the calculation algorithm of CCD is also worked out. By utilizing the CCD, the hazard assessment results are shown by some normal clouds, and the normal clouds are reflected by cloud descriptors; meanwhile, the hazard grade is confirmed by analyzing the cloud descriptors. After that, two biomass gasification stations undergo hazard assessment via CM-SPA and AHP based SPA, respectively. The comparison of assessment results illustrates that the CM-SPA is suitable and effective for the hazard assessment of a biomass gasification station and that CM-SPA will make the assessment results more reasonable and scientific. PMID:28076440
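
    The normal cloud at the core of CM-SPA is usually described by three descriptors, expectation Ex, entropy En and hyper-entropy He; a forward normal cloud generator then draws cloud drops as below (descriptor values are hypothetical), which illustrates the combined randomness and fuzziness that the cloud weights are meant to capture.

```python
# Forward normal cloud generator: each drop x_i ~ N(Ex, s_i) with the
# per-drop standard deviation s_i ~ |N(En, He)|; descriptors are hypothetical.
import numpy as np

def normal_cloud(Ex, En, He, n, seed=0):
    rng = np.random.default_rng(seed)
    s = np.abs(rng.normal(En, He, n))          # per-drop standard deviation
    x = rng.normal(Ex, s)                      # cloud drops
    membership = np.exp(-((x - Ex) ** 2) / (2.0 * s ** 2))
    return x, membership

drops, mu = normal_cloud(Ex=0.6, En=0.08, He=0.02, n=1000)
print(f"mean drop = {drops.mean():.3f}, mean membership = {mu.mean():.3f}")
```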

  5. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    Science.gov (United States)

    Paprotny, Dominik; Morales-Nápoles, Oswaldo; Jonkman, Sebastiaan N.

    2017-07-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-runoff models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km². The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with the Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating flood zones of local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents the aggregated results on the flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase of flood zones area by 2-4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (28-38 % for a 100-year return period). This is because in many parts of Europe river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.
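
    At the core of the simplified chain is a one-dimensional steady-state hydraulic calculation; for a wide rectangular reach this reduces, in the simplest case, to solving Manning's equation for the flow depth associated with a return-period discharge, as in the sketch below (channel geometry, roughness and discharge are hypothetical, not values from the pan-European model).

```python
# Solve Manning's equation for normal depth in a rectangular channel given a
# return-period discharge; all numbers are hypothetical.
from scipy.optimize import brentq

Q = 850.0        # return-period discharge, m^3/s
B = 120.0        # channel width, m
S = 0.0008       # bed slope
n_manning = 0.035

def manning_residual(h):
    area = B * h
    radius = area / (B + 2.0 * h)    # hydraulic radius
    return (1.0 / n_manning) * area * radius ** (2.0 / 3.0) * S ** 0.5 - Q

depth = brentq(manning_residual, 1e-3, 50.0)
print(f"normal depth for Q = {Q:.0f} m^3/s: {depth:.2f} m")
```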

  6. A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards

    Science.gov (United States)

    Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.

    Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps and the high data density is used to build up systematic knowledge of glacier hazard locations and potentials. Experiences from long research activities thereby form an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zone of hazardous processes. On the other hand, in large and remote high mountains such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts including area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation and (3) modeling of glacier hazards. Remote sensing data is used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, is performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the model to indicate areas with lower and higher risk to be affected by catastrophic events. Application of the models for recent ice avalanches and lake outbursts show

  7. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland)]; Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)]

    2009-05-29

    One of the problems of analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and can be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which consider one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which includes information about between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
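
    For the use-type question posed here, the likelihood ratio is simply the ratio of two class-conditional densities evaluated at the measured features; a heavily simplified univariate sketch (kernel densities for both classes, synthetic dRI values, not the authors' two-level between/within-object model) looks like this:

```python
# Heavily simplified univariate LR for glass use-type classification:
# LR = p(dRI | window) / p(dRI | container), with both densities estimated
# by kernel density estimation from synthetic reference data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
dri_window = rng.normal(-3.2, 0.4, 200)      # synthetic dRI = log10|RIa - RIb|
dri_container = rng.normal(-2.4, 0.5, 200)

kde_window = gaussian_kde(dri_window)
kde_container = gaussian_kde(dri_container)

evidence = -3.0                              # measured dRI of the questioned fragment
lr = kde_window(evidence)[0] / kde_container(evidence)[0]
print(f"LR (window vs container) = {lr:.2f}")
```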

  8. Snakes as hazards: modelling risk by chasing chimpanzees.

    Science.gov (United States)

    McGrew, William C

    2015-04-01

    Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.

  9. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are drawn for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The main impact factors are identified for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
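
    A minimal example of the kind of parametric duration fitting described here, using scipy to fit a Weibull distribution to synthetic clearance times (the SIMS records themselves are not public in this abstract):

```python
# Fit a Weibull distribution to synthetic incident clearance times and read
# off the implied duration dependence; real SIMS data are not used here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
clearance_min = stats.weibull_min.rvs(c=1.4, scale=35.0, size=300, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(clearance_min, floc=0.0)
print(f"shape = {shape:.2f}, scale = {scale:.1f} min")
# A shape parameter above 1 implies the clearance hazard rises with elapsed
# time, i.e. an ongoing incident becomes increasingly likely to be cleared.
```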

  10. Isotopic Ratios in Titan's Methane: Measurements and Modeling

    Science.gov (United States)

    Nixon, C. A.; Temelso, B.; Vinatier, S.; Teanby, N. A.; Bezard, B.; Achterberg, R. K.; Mandt, K. E.; Sherrill, C. D.; Irwin, P. G.; Jennings, D. E.; Romani, P. N.; Coustenis, A.; Flasar, F. M.

    2012-01-01

    The existence of methane in Titan's atmosphere (approx. 6% level at the surface) presents a unique enigma, as photochemical models predict that the current inventory will be entirely depleted by photochemistry on a timescale of approximately 20 Myr. In this paper, we examine the clues available from isotopic ratios (12C/13C and D/H) in Titan's methane as to the past atmospheric history of this species. We first analyze recent infrared spectra of CH4 collected by the Cassini Composite Infrared Spectrometer, measuring simultaneously for the first time the abundances of all three detected minor isotopologues: (13)CH4, (12)CH3D, and (13)CH3D. From these we compute estimates of 12C/13C = 86.5 ± 8.2 and D/H = (1.59 ± 0.33) × 10^-4, in agreement with recent results from the Huygens GCMS and Cassini INMS instruments. We also use transition state theory to estimate the fractionation that occurs in carbon and hydrogen during a critical reaction that plays a key role in the chemical depletion of Titan's methane: CH4 + C2H yields CH3 + C2H2. Using these new measurements and predictions we proceed to model the time evolution of 12C/13C and D/H in Titan's methane under several prototypical replenishment scenarios. In our Model 1 (no resupply of CH4), we find that the present-day 12C/13C implies that the CH4 entered the atmosphere 60-1600 Myr ago if methane is depleted by chemistry and photolysis alone, but much more recently (most likely less than 10 Myr ago) if hydrodynamic escape is also occurring. On the other hand, if methane has been continuously supplied at the replenishment rate then the isotopic ratios provide no constraints, and likewise for the case where atmospheric methane is increasing. We conclude by discussing how these findings may be combined with other evidence to constrain the overall history of atmospheric methane.
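
    The qualitative logic of the no-resupply scenario can be illustrated with a standard Rayleigh distillation calculation: if methane is removed with a slight preference for the lighter isotopologue (fractionation factor just below 1), the 12C/13C ratio of the remaining reservoir drifts downward as the reservoir is depleted. The numbers below are purely illustrative and are not the paper's transition-state-theory fractionation factors or fitted ages.

```python
# Rayleigh distillation illustration (illustrative numbers only): evolution
# of the 12C/13C ratio of the remaining methane as a fraction f is left,
# for a loss process that slightly favours the lighter isotopologue.
import numpy as np

alpha = 0.995            # k(13C) / k(12C), hypothetical fractionation factor
R0 = 89.0                # assumed initial 12C/13C, near the terrestrial standard
f = np.array([1.0, 0.8, 0.5, 0.2, 0.1, 0.05])

# Rayleigh law: (13C/12C) = (13C/12C)_0 * f**(alpha - 1), hence for 12C/13C:
R = R0 * f ** (1.0 - alpha)

for fi, Ri in zip(f, R):
    print(f"remaining fraction {fi:4.2f}: 12C/13C = {Ri:6.2f}")
```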

  11. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not possess the Wilks phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by the advantages of both the EL and score test methods, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^{-1/2}, the possibly fastest rate for lack-of-fit testing; they involve weight functions, which provides the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.
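
    For readers unfamiliar with the empirical likelihood ratio, the sketch below computes Owen's one-sample EL ratio statistic for a mean, whose null distribution is chi-square(1) by the Wilks phenomenon mentioned above; it is a generic illustration only, not the bias-corrected EL score test developed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def el_stat(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0 (Owen's one-sample EL).
    Asymptotically chi-square with 1 degree of freedom under H0."""
    z = np.asarray(x, dtype=float) - mu0
    n = len(z)
    if z.min() >= 0.0 or z.max() <= 0.0:        # mu0 outside the convex hull of the data
        return np.inf
    # Solve sum_i z_i / (1 + lam z_i) = 0 for the Lagrange multiplier lam.
    lo = (1.0 / n - 1.0) / z.max()
    hi = (1.0 / n - 1.0) / z.min()
    eps = 1e-9 * (hi - lo)
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(g, lo + eps, hi - eps)
    # Profile weights are w_i = 1 / (n (1 + lam z_i)), hence the statistic below.
    return 2.0 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=200)
print(el_stat(x, 1.0))        # compare against the chi-square(1) 95% quantile, about 3.84
```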

  12. Expansion of Collisional Radiative Model for Helium line ratio spectroscopy

    Science.gov (United States)

    Cinquegrani, David; Cooper, Chris; Forest, Cary; Milhone, Jason; Munoz-Borges, Jorge; Schmitz, Oliver; Unterberg, Ezekial

    2015-11-01

    Helium line ratio spectroscopy is a powerful technique of active plasma edge spectroscopy. It enables reconstruction of plasma edge parameters such as electron density and temperature through the use of suitable Collisional Radiative Models (CRM). An established approach is successful at moderate plasma densities (~10^18 m^-3 range) and temperatures (30-300 eV), taking recombination and charge exchange to be negligible. The goal of this work is to experimentally explore the limitations of this approach to CRM. For basic validation the Madison Plasma Dynamo eXperiment (MPDX) will be used. MPDX offers a very uniform plasma and spherical symmetry at low temperature (5-20 eV) and low density (10^16-10^17 m^-3). Initial data from MPDX show a deviation in CRM results when compared to Langmuir probe data. This discrepancy points to the importance of recombination effects. The validated model is applied to the first measurements of electron density and temperature in front of an ICRH antenna at the TEXTOR tokamak. These measurements are important for understanding RF coupling and PMI physics at the antenna limiters. Work supported in part by start-up funds of the Department of Engineering Physics at the UW - Madison, USA and NSF CAREER award PHY-1455210.

  13. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent from mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  14. Flexible parametric modelling of cause-specific hazards to estimate cumulative incidence functions

    Science.gov (United States)

    2013-01-01

    Background Competing risks are a common occurrence in survival analysis. They arise when a patient is at risk of more than one mutually exclusive event, such as death from different causes, and the occurrence of one of these may prevent any other event from ever happening. Methods There are two main approaches to modelling competing risks: the first is to model the cause-specific hazards and transform these to the cumulative incidence function; the second is to model directly on a transformation of the cumulative incidence function. We focus on the first approach in this paper. This paper advocates the use of the flexible parametric survival model in this competing risk framework. Results An illustrative example on the survival of breast cancer patients has shown that the flexible parametric proportional hazards model has almost perfect agreement with the Cox proportional hazards model. However, the large epidemiological data set used here shows clear evidence of non-proportional hazards. The flexible parametric model is able to adequately account for these through the incorporation of time-dependent effects. Conclusion A key advantage of using this approach is that smooth estimates of both the cause-specific hazard rates and the cumulative incidence functions can be obtained. It is also relatively easy to incorporate time-dependent effects which are commonly seen in epidemiological studies. PMID:23384310
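
    The transformation from cause-specific hazards to cumulative incidence functions referred to in this abstract is F_k(t) = ∫ h_k(u) S(u) du, with S the all-cause survival function. The sketch below evaluates that relation numerically for two made-up hazard functions; it is not the flexible parametric (spline-based) model of the paper, only the general transformation it relies on.

```python
import numpy as np

# Illustrative cause-specific hazards on a fine time grid (units of years).
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
h1 = 0.08 * t ** 0.3            # hazard of the event of interest (e.g. breast cancer death)
h2 = 0.05 * np.ones_like(t)     # hazard of the competing event (e.g. death from other causes)

# All-cause survival S(t) = exp(-integral of the sum of cause-specific hazards).
S = np.exp(-np.cumsum(h1 + h2) * dt)

# Cumulative incidence of cause k: F_k(t) = integral_0^t h_k(u) S(u) du.
cif1 = np.cumsum(h1 * S) * dt
cif2 = np.cumsum(h2 * S) * dt
# The two incidences plus the survivors should account for (approximately) everyone.
print(round(cif1[-1], 3), round(cif2[-1], 3), round(cif1[-1] + cif2[-1] + S[-1], 3))
```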

  15. Investigating applicability of Haeri-Samiee landslide hazard zonation model in Moalemkalayeh watershed, Iran

    Science.gov (United States)

    Beheshtirad, M.; Noormandipour, N.

    2009-04-01

    Identification of regions with potential for landslide occurrence is one of the basic measures in natural resources management that decreases the damage caused by these phenomena. For this purpose, different landslide hazard zonation models have been proposed based on environmental conditions and goals. In this research, the applicability of the Haeri-Samiee Landslide Hazard Zonation Model was investigated in the Moalemkalayeh watershed. To do this, existing landslides were identified and an inventory map was prepared as ground evidence. The topographical map (1:50,000) was divided into a network of 514 cells as working units. A landslide hazard zonation map was produced based on the H.S. model. We investigated the level of agreement between the potential hazard classes and values of the model and the ground evidence (landslide inventory map) in the SPSS and Minitab environments. Our results showed that there is a significant correlation at the 0.01 level between the potential hazard classes and values and the number of landslides, the area of landslides, as well as the product of the number and area of landslides in the H.S. model. Therefore the H.S. model is a suitable model for the Moalemkalayeh watershed.

  16. A model standardized risk assessment protocol for use with hazardous waste sites.

    OpenAIRE

    Marsh, G M; Day, R.

    1991-01-01

    This paper presents a model standardized risk assessment protocol (SRAP) for use with hazardous waste sites. The proposed SRAP focuses on the degree and patterns of evidence that exist for a significant risk to human populations from exposure to a hazardous waste site. The SRAP was designed with at least four specific goals in mind: to organize the available scientific data on a specific site and to highlight important gaps in this knowledge; to facilitate rational, cost-effective decision ma...

  17. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    Science.gov (United States)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slow-deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters together with estimations of its slip rate. By default in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and the final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results and to what degree. The results of such comparison evidence the deformation model and

  18. Technology Learning Ratios in Global Energy Models; Ratios de Aprendizaje Tecnologico en Modelos Energeticos Globales

    Energy Technology Data Exchange (ETDEWEB)

    Varela, M.

    2001-07-01

    The process of introduction of a new technology supposes that, while its production and utilisation increase, its operation also improves and its investment and production costs decrease. The accumulation of experience and learning with a new technology grows in parallel with the increase of its market share. This process is represented by technological learning curves, and the energy sector is not detached from this process of substitution of old technologies by new ones. The present paper carries out a brief review of the main energy models that include technology dynamics (learning). The energy scenarios developed by global energy models assume that the characteristics of the technologies vary with time. But this trend is incorporated in an exogenous way in these energy models, that is to say, it is only a function of time. This practice is applied to the cost indicators of the technology, such as the specific investment costs, or to the efficiency of the energy technologies. In recent years, the new concept of endogenous technological learning has been integrated within these global energy models. This paper examines the concept of technological learning in global energy models. It also analyses the technological dynamics of energy systems, including the endogenous modelling of the process of technological progress. Finally, it makes a comparison of several of the most used global energy models (MARKAL, MESSAGE and ERIS) and, more concretely, of the use these models make of the concept of technological learning. (Author) 17 refs.
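
    A one-factor learning curve of the kind these models use (exogenously or endogenously) links specific cost to cumulative installed capacity; the snippet below encodes that relation with invented figures for the initial cost and learning rate.

```python
import numpy as np

def unit_cost(cumulative_capacity, c0, learning_rate):
    """One-factor learning curve: every doubling of cumulative capacity cuts the
    specific cost by the learning rate (progress ratio = 1 - learning rate)."""
    b = -np.log2(1.0 - learning_rate)        # learning exponent
    return c0 * cumulative_capacity ** (-b)

# Hypothetical figures: 1000 EUR/kW at 1 GW installed and a 15% learning rate.
capacity_gw = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
print(np.round(unit_cost(capacity_gw, 1000.0, 0.15), 1))   # roughly [1000, 850, 722.5, ...]
```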

  19. Global Volcano Model: progress towards an international co-ordinated network for volcanic hazard and risk

    Science.gov (United States)

    Loughlin, Susan

    2013-04-01

    GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. GVM is a network that aims to co-ordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability and exposure with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; development of complementary hazards models and create relevant hazards and risk assessment tools. GVM acts through establishing task forces to deliver explicit deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database on large magnitude explosive eruptions. There is ongoing work to develop databases on debris avalanches, lava dome hazards and ash hazard. GVM aims to develop the capability to anticipate future volcanism and its consequences.

  20. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    Science.gov (United States)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  1. POSSIBILISTIC SHARPE RATIO BASED NOVICE PORTFOLIO SELECTION MODELS

    OpenAIRE

    Rupak Bhattacharyya

    2013-01-01

    This paper uses the concept of possibilistic risk aversion to propose a new approach for portfolio selection in a fuzzy environment. Using possibility theory, the possibilistic mean, variance, standard deviation and risk premium of a fuzzy number are established. The possibilistic Sharpe ratio is defined as the ratio of the possibilistic risk premium to the possibilistic standard deviation of a portfolio. The Sharpe ratio is a measure of the performance of the portfolio compared to the risk...
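
    A minimal numerical sketch of the quantities named above, assuming the Carlsson-Fullér level-set definitions of the possibilistic mean and variance and a triangular fuzzy portfolio return; the example figures (return spreads, risk-free rate) are invented for illustration.

```python
import numpy as np

def possibilistic_moments(core, left, right, n=200001):
    """Possibilistic mean and variance of a triangular fuzzy number with the given
    core and left/right spreads, computed from its gamma-level sets:
    E(A) = int_0^1 g*(a1(g)+a2(g)) dg,  Var(A) = 1/2 int_0^1 g*(a2(g)-a1(g))^2 dg."""
    g = np.linspace(0.0, 1.0, n)
    dg = g[1] - g[0]
    a1 = core - (1.0 - g) * left           # lower end of the gamma-cut
    a2 = core + (1.0 - g) * right          # upper end of the gamma-cut
    mean = np.sum(g * (a1 + a2)) * dg      # simple rectangle-rule quadrature
    var = 0.5 * np.sum(g * (a2 - a1) ** 2) * dg
    return mean, var

# Invented fuzzy portfolio return (in %): core 8, left spread 3, right spread 5.
mean_r, var_r = possibilistic_moments(8.0, 3.0, 5.0)   # closed forms: 8 + 2/6 and (3+5)**2/24
risk_free = 2.0
sharpe = (mean_r - risk_free) / np.sqrt(var_r)         # possibilistic risk premium / std dev
print(round(mean_r, 3), round(var_r, 3), round(sharpe, 3))
```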

  2. The Capra Research Program for Modelling Extreme Mass Ratio Inspirals

    CERN Document Server

    Thornburg, Jonathan

    2011-01-01

    Suppose a small compact object (black hole or neutron star) of mass $m$ orbits a large black hole of mass $M \gg m$. This system emits gravitational waves (GWs) that have a radiation-reaction effect on the particle's motion. EMRIs (extreme-mass-ratio inspirals) of this type will be important GW sources for LISA; LISA's data analysis will require highly accurate EMRI GW templates. In this article I outline the "Capra" research program to try to model EMRIs and calculate their GWs ab initio, assuming only that $m \ll M$ and that the Einstein equations hold. Here we treat the EMRI spacetime as a perturbation of the large black hole's "background" (Schwarzschild or Kerr) spacetime and use the methods of black-hole perturbation theory, expanding in the small parameter $m/M$. The small body's motion can be described either as the result of a radiation-reaction "self-force" acting in the background spacetime or as geodesic motion in a perturbed spacetime. Several different lines of reasoning lead to the (s...

  3. AN INSTRUCTURAL SYSTEM MODEL OF COASTAL MANAGEMENT TO THE WATER RELATED HAZARDS IN CHINA

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Coastal lowlands have large areas of hazard impact and relatively low capacity to prevent water-related hazards, as indicated by widespread flood hazards and high percentages of land with high flood vulnerability. Increasing population pressure and the shift of resource exploitation from land to sea will force more and more coastal lowlands to be developed in the future, further enhancing the danger of water-related hazards. In this paper, the coastal lowlands in the northern Jiangsu province, China, were selected as a case study. The Interpretation Structural Model (ISM) was employed to analyze the direct and indirect impacts among the elements within the system and, thereby, to identify the causal elements, middle linkages, their expressions, and relations.

  4. Three multimedia models used at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C. [Brookhaven National Lab., Upton, NY (United States); Rambaugh, J.O.; Potter, S. [Geraghty and Miller, Inc., Plainview, NY (United States)

    1996-02-01

    Multimedia models are used commonly in the initial phases of the remediation process where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the model; these descriptions were provided by the model developers.

  5. Modelling tropical cyclone hazards under climate change scenario using geospatial techniques

    Science.gov (United States)

    Hoque, M. A.; Phinn, S.; Roelfsema, C.; Childs, I.

    2016-11-01

    Tropical cyclones are a common and devastating natural disaster in many coastal areas of the world. As the intensity and frequency of cyclones will increase under the most likely future climate change scenarios, appropriate approaches at local scales (1-5 km) are essential for producing sufficiently detailed hazard models. These models are used to develop mitigation plans and strategies for reducing the impacts of cyclones. This study developed and tested a hazard modelling approach for cyclone impacts in Sarankhola upazila, a 151 km2 local government area in coastal Bangladesh. The study integrated remote sensing, spatial analysis and field data to model cyclone-generated hazards under a climate change scenario at local scales. A model integrating historical cyclone data and a Digital Elevation Model (DEM) was used to generate the cyclone hazard maps for different cyclone return periods. Frequency analysis was carried out using historical cyclone data (1960-2015) to calculate the storm surge heights of the 5, 10, 20, 50 and 100 year return periods of cyclones. A local sea level rise scenario of 0.34 m for the year 2050 was simulated for the 20 and 50 year return periods. Our results showed that cyclone-affected areas increased with the increase of return periods. Around 63% of the study area was located in the moderate to very high hazard zones for the 50 year return period, while it was 70% for the 100 year return period. The climate change scenarios increased the cyclone impact area by 6-10% in every return period. Our findings indicate this approach has potential to model cyclone hazards for developing mitigation plans and strategies to reduce the future impacts of cyclones.
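
    The frequency analysis step described above (fitting historical annual maxima and reading off surge heights for given return periods) can be sketched as below; the surge values and the choice of a Gumbel distribution are placeholders, not the study's actual data or fitted model.

```python
import numpy as np
from scipy import stats

# Placeholder annual-maximum storm surge heights (m); the study used 1960-2015 records.
surge_max = np.array([1.2, 1.6, 2.1, 1.4, 2.8, 1.9, 2.3, 3.4, 1.7, 2.0,
                      2.6, 1.5, 3.0, 2.2, 1.8, 2.9, 2.4, 1.3, 2.7, 3.8])

loc, scale = stats.gumbel_r.fit(surge_max)
sea_level_rise_2050 = 0.34                      # the scenario value quoted above (m)
for T in (5, 10, 20, 50, 100):                  # return periods in years
    level = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    print(f"{T:4d}-yr surge: {level:4.2f} m  (with 2050 SLR: {level + sea_level_rise_2050:4.2f} m)")
```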

  6. DEM resolution effects on shallow landslide hazard and soil redistribution modelling

    NARCIS (Netherlands)

    Claessens, L.F.G.; Heuvelink, G.B.M.; Schoorl, J.M.; Veldkamp, A.

    2005-01-01

    In this paper we analyse the effects of digital elevation model (DEM) resolution on the results of a model that simulates spatially explicit relative shallow landslide hazard and soil redistribution patterns and quantities. We analyse distributions of slope, specific catchment area and relative haza

  7. Measures to assess the prognostic ability of the stratified Cox proportional hazards model

    DEFF Research Database (Denmark)

    The Fibrinogen Studies Collaboration (The Copenhagen City Heart Study); Tybjærg-Hansen, Anne

    2009-01-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures w...

  8. Validation and evaluation of predictive models in hazard assessment and risk management

    NARCIS (Netherlands)

    Beguería, S.

    2007-01-01

    The paper deals with the validation and evaluation of mathematical models in natural hazard analysis, with a special focus on establishing their predictive power. Although most of the tools and statistics available are common to general classification models, some peculiarities arise in the case of h

  9. Modelling long term survival with non-proportional hazards

    NARCIS (Netherlands)

    Perperoglou, Aristidis

    2006-01-01

    In this work I consider models for survival data when the assumption of proportionality does not hold. The thesis consists of an Introduction, five papers, a Discussion and an Appendix. The Introduction presents technical information about the Cox model and introduces the ideas behind the extensions

  10. Modeling and Testing Landslide Hazard Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Mutasem Sh. Alkhasawneh

    2014-01-01

    Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance and are usually represented by an easy to interpret tree-like structure. Four models were created using the Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137570 samples was selected for each variable in the analysis, where 68786 samples represent landslides and 68786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using the Exhaustive CHAID (82.0%) model compared to the CHAID (81.9%), CRT (75.6%), and QUEST (74.0%) models. Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
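
    As a rough analogue of the workflow above (factor layers in, 10-fold cross-validated tree out, factor importances reported), the sketch below uses scikit-learn's CART tree on synthetic factor data; CHAID, Exhaustive CHAID and QUEST are not available in scikit-learn, so CART stands in for them here, and all data are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
factors = ["slope_angle", "dist_drainage", "surface_area", "slope_aspect", "cross_curvature"]
n = 2000                                               # placeholder sample size
X = rng.normal(size=(n, len(factors)))                 # stand-ins for DEM-derived factor values
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)   # 1 = landslide

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
cv_acc = cross_val_score(tree, X, y, cv=10)            # 10-fold cross-validation accuracy
tree.fit(X, y)
for name, imp in sorted(zip(factors, tree.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:16s} importance = {imp:.3f}")
print("mean 10-fold CV accuracy:", round(cv_acc.mean(), 3))
```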

  11. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data. Howe

  12. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data. Howe

  13. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data.

  14. Limits on Log Odds Ratios for Unidimensional Item Response Theory Models

    Science.gov (United States)

    Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip

    2007-01-01

    Bounds are established for log odds ratios (log cross-product ratios) involving pairs of items for item response models. First, expressions for bounds on log odds ratios are provided for one-dimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model. Results are…
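
    To make the quantity being bounded concrete, the snippet below simulates responses to two Rasch items and computes the observed log cross-product (odds) ratio for that item pair; the difficulties and ability distribution are arbitrary, and the snippet does not reproduce the bounds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_person = 5000
b = np.array([-1.0, 0.5])                                   # two illustrative item difficulties
theta = rng.normal(size=n_person)                           # latent abilities
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))    # Rasch response probabilities
x = (rng.random((n_person, 2)) < p).astype(int)             # simulated 0/1 item responses

# 2x2 table for the item pair and its log cross-product (odds) ratio.
n11 = np.sum((x[:, 0] == 1) & (x[:, 1] == 1))
n10 = np.sum((x[:, 0] == 1) & (x[:, 1] == 0))
n01 = np.sum((x[:, 0] == 0) & (x[:, 1] == 1))
n00 = np.sum((x[:, 0] == 0) & (x[:, 1] == 0))
print("observed log odds ratio:", round(np.log(n11 * n00 / (n10 * n01)), 3))
```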

  15. A REVIEW OF MULTI-HAZARD RISK ASSESSMENT (MHRA) USING 4D DYNAMIC MODELS

    Directory of Open Access Journals (Sweden)

    T. Bibi

    2015-10-01

    Full Text Available This paper reviews 4D dynamic models for multiple natural hazard risk assessment. It is important to review the characteristics of the different dynamic models and to choose the most suitable model for a certain application. The characteristics of the different 4D dynamic models are based on several main aspects (e.g. space, time, event or phenomenon, etc.). The most suitable 4D dynamic model depends on the type of application it is used for. There is no single 4D dynamic model suitable for all types of application. Therefore, it is very important to define the requirements of the 4D dynamic model. The main context of this paper is spatio-temporal modelling for multiple hazards.

  16. The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms

    Science.gov (United States)

    Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie

    2009-01-01

    The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with

  17. Application of Satellite remote sensing for detailed landslide inventories using Frequency ratio model and GIS

    Directory of Open Access Journals (Sweden)

    Himan Shahabi

    2012-07-01

    Full Text Available This paper presents a landslide susceptibility analysis in the central Zab basin on the southwest mountainsides of West Azerbaijan province in Iran, using remotely sensed data and a Geographic Information System. A landslide database was generated using satellite imagery and aerial photographs, accompanied by field investigations using a Differential Global Positioning System, to produce a landslide inventory map. A digital elevation model (DEM) was first constructed using GIS software. Nine landslide-inducing factors were used for the landslide vulnerability analysis: slope, slope aspect, distance to road, distance to drainage network, distance to fault, land use, precipitation, elevation, and geological factors. This study demonstrates the synergistic use of medium-resolution, multitemporal Satellite Pour l'Observation de la Terre (SPOT) imagery to prepare the landslide inventory map and Landsat ETM+ to prepare the land use map. The post-classification comparison method using the Maximum Likelihood classifier with SPOT images was able to detect approximately 70% of landslides. The frequency ratio of each factor was computed using the above thematic factors together with past landslide locations. The method employs the landslide events as the dependent variable and the data layers as independent variables, and makes use of the correlation between these two in landslide zonation. Given the employed model and the variables, significance tests were implemented on each independent variable, and the degree of fit of the zonation map was estimated. The landslide susceptibility map was produced using raster analysis and classified into four classes: low, moderate, high and very high. The model was validated using the relative landslide density index (R-index) method. The final landslide hazard susceptibility map was drawn using the frequency ratio. As a result, the identified landslides were located in the low (51.37%), moderate (29.35%), high (11.10%) and very high
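
    The frequency ratio of a factor class, as used above, is simply the share of landslide cells falling in that class divided by the share of all cells in that class; summing a cell's class ratios over all factors gives its susceptibility index. A small pandas sketch with made-up raster cells:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
cells = pd.DataFrame({"slope_class": rng.integers(1, 5, 10000)})       # hypothetical factor classes
cells["landslide"] = rng.random(10000) < 0.01 * cells["slope_class"]   # steeper classes fail more often

def frequency_ratio(df, factor_col, slide_col="landslide"):
    """FR(class) = (share of landslide cells in the class) / (share of all cells in the class)."""
    slide_share = df.loc[df[slide_col], factor_col].value_counts(normalize=True)
    area_share = df[factor_col].value_counts(normalize=True)
    return (slide_share / area_share).sort_index()

print(frequency_ratio(cells, "slope_class"))   # values above 1 mark landslide-prone classes
```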

  18. Estimation of Hazard Functions in the Log-Linear Age-Period-Cohort Model: Application to Lung Cancer Risk Associated with Geographical Area

    Directory of Open Access Journals (Sweden)

    Tengiz Mdzinarishvili

    2010-04-01

    Full Text Available An efficient computing procedure for estimating the age-specific hazard functions by the log-linear age-period-cohort (LLAPC model is proposed. This procedure accounts for the influence of time period and birth cohort effects on the distribution of age-specific cancer incidence rates and estimates the hazard function for populations with different exposures to a given categorical risk factor. For these populations, the ratio of the corresponding age-specific hazard functions is proposed for use as a measure of relative hazard. This procedure was used for estimating the risks of lung cancer (LC for populations living in different geographical areas. For this purpose, the LC incidence rates in white men and women, in three geographical areas (namely: San Francisco-Oakland, Connecticut and Detroit, collected from the SEER 9 database during 1975–2004, were utilized. It was found that in white men the averaged relative hazard (an average of the relative hazards over all ages of LC in Connecticut vs. San Francisco-Oakland is 1.31 ± 0.02, while in Detroit vs. San Francisco-Oakland this averaged relative hazard is 1.53 ± 0.02. In white women, analogous hazards in Connecticut vs. San Francisco-Oakland and Detroit vs. San Francisco-Oakland are 1.22 ± 0.02 and 1.32 ± 0.02, correspondingly. The proposed computing procedure can be used for assessing hazard functions for other categorical risk factors, such as gender, race, lifestyle, diet, obesity, etc.
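
    Stripped of the period and cohort adjustments of the LLAPC model, the "averaged relative hazard" quoted above is the mean over age groups of the ratio of age-specific incidence rates in two populations; the sketch below computes that quantity from invented counts and person-years, not from the SEER data used in the paper.

```python
import numpy as np

ages = np.arange(40, 85, 5)                         # 5-year age groups (illustrative)
# Made-up age-specific lung cancer counts and person-years for two areas.
cases_a = np.array([12, 25, 48, 80, 120, 150, 160, 140, 110])       # "area A"
py_a    = np.array([90, 88, 85, 80, 72, 60, 45, 30, 18]) * 1.0e3
cases_b = np.array([10, 20, 40, 70, 105, 135, 150, 130, 100])       # "area B"
py_b    = np.array([95, 93, 90, 84, 75, 63, 47, 32, 20]) * 1.0e3

rate_a = cases_a / py_a                 # age-specific incidence rates (hazard estimates)
rate_b = cases_b / py_b
relative_hazard = rate_a / rate_b       # age-specific relative hazard of area A vs. area B
for age, rh in zip(ages, relative_hazard):
    print(f"age {age}-{age + 4}: relative hazard = {rh:.2f}")
print("averaged relative hazard:", round(relative_hazard.mean(), 2))
```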

  19. Deterministic slope failure hazard assessment in a model catchment and its replication in neighbourhood terrain

    Directory of Open Access Journals (Sweden)

    Kiran Prasad Acharya

    2016-01-01

    Full Text Available In this work, we prepare and replicate a deterministic slope failure hazard model in small-scale catchments of the tertiary sedimentary terrain of Niihama city in western Japan. It is generally difficult to replicate a deterministic model from one catchment to another due to the lack of exactly similar geo-mechanical and hydrological parameters. To overcome this problem, discriminant function modelling was done with the deterministic slope failure hazard model and the DEM-based causal factors of slope failure, which yielded an empirical parametric relationship or a discriminant function equation. This parametric relationship was used to predict the slope failure hazard index in a total of 40 target catchments in the study area. From ROC plots, prediction rates between 0.719-0.814 and 0.704-0.805 were obtained with the inventories of September and October slope failures, respectively. This means September slope failures were better predicted than October slope failures by approximately 1%. The results show that the prediction of the slope failure hazard index is possible, even at a small catchment scale, in similar geophysical settings. Moreover, the replication of the deterministic model through discriminant function modelling was found to be successful in predicting typhoon rainfall-induced slope failures with moderate to good accuracy without any use of geo-mechanical and hydrological parameters.

  20. Efficient pan-European flood hazard modelling through a combination of statistical and physical models

    Science.gov (United States)

    Paprotny, Dominik; Morales Nápoles, Oswaldo

    2016-04-01

    Low-resolution hydrological models are often applied to calculate extreme river discharges and delimitate flood zones on continental and global scales. Still, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. Firstly, a statistical model based on Bayesian Networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain and land use data. The model's predictions of average annual maxima or discharges with certain return periods are of similar performance to physical rainfall-runoff models applied at continental scale. A database of discharge scenarios - return periods under present and future climate - was prepared for the majority of European rivers. Secondly, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling conserved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with some local high-resolution studies. Indeed, the comparison shows that, overall, the methods presented here gave similar or better alignment with local studies than the previously released pan-European flood map.

  1. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
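
    As a small, low-dimensional stand-in for the additive hazards regressions discussed here, the sketch below fits lifelines' Aalen additive model with a ridge-type coefficient penalty on synthetic survival data; the penalty plays a role loosely analogous to the regularisation needed with high-dimensional gene-expression covariates, all data and settings are invented, and attribute names should be checked against the installed lifelines version.

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"x{i}" for i in range(5)])
df["T"] = rng.exponential(scale=10.0 / np.exp(0.4 * df["x0"]))   # synthetic survival times
df["E"] = (rng.random(n) < 0.8).astype(int)                      # roughly 80% observed events

aaf = AalenAdditiveFitter(coef_penalizer=1.0)    # ridge-type penalty stabilises the fit
aaf.fit(df, duration_col="T", event_col="E")
print(aaf.cumulative_hazards_.tail())            # time-varying cumulative regression coefficients
```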

  2. A NOVEL SOFT COMPUTING MODEL ON LANDSLIDE HAZARD ZONE MAPPING

    Directory of Open Access Journals (Sweden)

    Iqbal Quraishi

    2012-11-01

    Full Text Available The effect of landslides is very prominent in India as well as the world over. In India, the North-East region and all the areas beneath the Himalayan range are prone to landslides. State-wise, Uttarakhand, Himachal Pradesh and the northern part of West Bengal are identified as risk zones for landslides. In West Bengal, the Darjeeling area is identified as our focus zone. There are several types of landslides depending upon various conditions. The most significant contributing factor for landslides is earthquakes. Both field and GIS data are very versatile and large in amount. Creating a proper data warehouse includes both remote sensing and field studies. Our proposed soft computing model merges the field and remote sensing data, creates an optimized landslide susceptibility map of the zone and also provides a broad risk assessment. It takes into account census and economic survey data as input to calculate and predict the probable number of damaged houses, roads and other amenities, including the effect on GDP. The model is highly customizable and tends to provide situation-specific results. A fuzzy logic based approach has been considered to partially implement the model in terms of different parameter data sets to show the effectiveness of the proposed model.

  3. On the predictive information criteria for model determination in seismic hazard analysis

    Science.gov (United States)

    Varini, Elisa; Rotondi, Renata

    2016-04-01

    Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space, whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply under specific assumptions on models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named the Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But the BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations extend to two famous penalized likelihood methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since they are proved to be approximations of -2 log BF. In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating the expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point
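
    The Watanabe criterion mentioned at the end can be computed directly from a matrix of pointwise posterior log-likelihoods; a minimal sketch of that computation (following the lppd / p_WAIC decomposition of Gelman et al.) is given below, with the input matrix assumed to come from the user's own sampler.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC from pointwise log-likelihoods: log_lik[s, i] = log p(y_i | theta_s)
    for posterior draw s and observation i."""
    n_draws = log_lik.shape[0]
    # Log pointwise predictive density: log of the posterior mean of the likelihood.
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(n_draws))
    # Effective number of parameters: posterior variance of the pointwise log-likelihood.
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# Example with a fake 1000-draw by 50-observation log-likelihood matrix.
fake = np.random.default_rng(5).normal(loc=-1.2, scale=0.1, size=(1000, 50))
print(round(waic(fake), 2))
```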

  4. Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site

    Science.gov (United States)

    Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.

    2012-04-01

    The aim of this work is to develop a methodology for integrating climate change scenarios, and especially their precipitation component, into quantitative hazard assessment. The effects of climate change will be different depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and the quality of data are generally very heterogeneous at a regional scale, it is necessary to take into account the uncertainty in the analysis. In this perspective, a new hazard modeling method is developed and integrated into a program named ALICE. This program integrates mechanical stability analysis through GIS software taking into account data uncertainty. This method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert approach is still necessary to finalize the maps. Indeed it is the only way to take into account some influential factors in slope stability such as heterogeneity of the geological formations or effects of anthropic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios into the ALICE program, and especially their precipitation component, with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnics and different hydrological contexts varying in time. This communication, realized within the framework of the Safeland project, is supported by the European Commission under the 7th Framework Programme for Research and Technological

  5. A Fault-based Crustal Deformation Model for UCERF3 and Its Implication to Seismic Hazard Analysis

    Science.gov (United States)

    Zeng, Y.; Shen, Z.

    2012-12-01

    shear zone and northern Walker Lane. This implies a significant increase in seismic hazard in the eastern California and northern Walker Lane region, but decreased seismic hazard in the southern San Andreas area, relative to the current model used in the USGS 2008 seismic hazard map evaluation. Overall the geodetic model suggests an increase in total regional moment rate of 24% compared with the UCERF2 model and the 150-yr California earthquake catalog. However not all the increases are seismic so the seismic/aseismic slip rate ratios are critical for future seismic hazard assessment.

  6. Medical Modeling of Particle Size Effects for CB Inhalation Hazards

    Science.gov (United States)

    2015-09-01

    of body orientation (posture) on deposition, and availability of 3D stochastic lung geometry for more realistic assessment of variation of dose in... hygiene community, with their role of monitoring and protecting workers in the workplace, has had a key role in developing standard models of... M.G., Miller, F.J. and Raabe, O.G. (1995). Particle Inhalability Curves for Humans and Small Laboratory Animals. Annals of Occupational Hygiene 39

  7. Incorporating induced seismicity in the 2014 United States National Seismic Hazard Model: results of the 2014 workshop and sensitivity studies

    Science.gov (United States)

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.

    2015-01-01

    The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report, however it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after

  8. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  9. Modeling geomagnetic induction hazards using a 3-D electrical conductivity model of Australia

    Science.gov (United States)

    Wang, Liejun; Lewis, Andrew M.; Ogawa, Yasuo; Jones, William V.; Costelloe, Marina T.

    2016-12-01

    The surface electric field induced by external geomagnetic source fields is modeled for a continental-scale 3-D electrical conductivity model of Australia at periods of a few minutes to a few hours. The amplitude and orientation of the induced electric field at periods of 360 s and 1800 s are presented and compared to those derived from a simplified ocean-continent (OC) electrical conductivity model. It is found that the induced electric field in the Australian region is distorted by the heterogeneous continental electrical conductivity structures and surrounding oceans. On the northern coastlines, the induced electric field is decreased relative to the simple OC model due to a reduced conductivity contrast between the seas and the enhanced conductivity structures inland. In central Australia, the induced electric field is less distorted with respect to the OC model as the location is remote from the oceans, but inland crustal high-conductivity anomalies are the major source of distortion of the induced electric field. In the west of the continent, the lower conductivity of the Western Australia Craton increases the conductivity contrast between the deeper oceans and land and significantly enhances the induced electric field. Generally, the induced electric field in southern Australia, south of latitude -20°, is higher compared to northern Australia. This paper provides a regional indicator of geomagnetic induction hazards across Australia.

  10. An example of debris-flows hazard modeling using GIS

    Directory of Open Access Journals (Sweden)

    L. Melelli

    2004-01-01

    Full Text Available We present a GIS-based model for predicting debris-flow occurrence. The availability of two different digital datasets and the use of a Digital Elevation Model (at a given scale) have greatly enhanced our ability to quantify and to analyse the topography in relation to debris-flows. In particular, analysing the relationship between debris-flows and the various causative factors provides new understanding of the mechanisms. We studied the contact zone between the calcareous basement and the fluvial-lacustrine infill in the adjacent northern area of the Terni basin (Umbria, Italy), and identified eleven basins and corresponding alluvial fans. We suggest that accumulations of colluvium in topographic hollows, whatever the sources might be, should be considered potential debris-flow source areas. In order to develop a susceptibility map for the entire area, an index was calculated from the number of initiation locations in each causative factor unit divided by the areal extent of that unit within the study area. This index identifies those units that produce the most debris-flows in each Representative Elementary Area (REA). Finally, the results are presented with the advantages and the disadvantages of the approach, and the need for further research.

  11. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    Science.gov (United States)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.

  12. Combining computational models for landslide hazard assessment of Guantánamo province, Cuba

    NARCIS (Netherlands)

    Castellanos Abella, E.A.

    2009-01-01

    As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial photo

  13. On estimation and tests of time-varying effects in the proportional hazards model

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Martinussen, Torben

    Grambsch and Therneau (1994) suggest using Schoenfeld residuals to investigate whether some of the regression coefficients in Cox's (1972) proportional hazards model are time-dependent. Their method is a one-step procedure based on Cox's initial estimate. We suggest an algorithm which in the first
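
    In practice, a scaled-Schoenfeld-residual check of proportional hazards in the spirit of Grambsch and Therneau can be run with the lifelines package as sketched below; the built-in Rossi recidivism data set is used purely for illustration, and the exact function signature should be checked against the installed lifelines version.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

rossi = load_rossi()                                    # built-in recidivism data set
cph = CoxPHFitter().fit(rossi, duration_col="week", event_col="arrest")

# Test of the proportional hazards assumption based on scaled Schoenfeld residuals.
results = proportional_hazard_test(cph, rossi, time_transform="rank")
results.print_summary()                                 # per-covariate statistics and p-values
```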

  14. Combining computational models for landslide hazard assessment of Guantánamo province, Cuba

    NARCIS (Netherlands)

    Castellanos Abella, E.A.

    2008-01-01

    As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial

  15. Teen Court Referral, Sentencing, and Subsequent Recidivism: Two Proportional Hazards Models and a Little Speculation

    Science.gov (United States)

    Rasmussen, Andrew

    2004-01-01

    This study extends literature on recidivism after teen court to add system-level variables to demographic and sentence content as relevant covariates. Interviews with referral agents and survival analysis with proportional hazards regression supplement quantitative models that include demographic, sentencing, and case-processing variables in a…

  16. The Capra Research Program for Modelling Extreme Mass Ratio Inspirals

    Science.gov (United States)

    Thornburg, Jonathan

    2011-02-01

    Suppose a small compact object (black hole or neutron star) of mass m orbits a large black hole of mass M ≫ m. This system emits gravitational waves (GWs) that have a radiation-reaction effect on the particle's motion. EMRIs (extreme-mass-ratio inspirals) of this type will be important GW sources for LISA. To fully analyze these GWs, and to detect weaker sources also present in the LISA data stream, will require highly accurate EMRI GW templates. In this article I outline the ``Capra'' research program to try to model EMRIs and calculate their GWs ab initio, assuming only that m ≪ M and that the Einstein equations hold. Because m ≪ M the timescale for the particle's orbit to shrink is too long for a practical direct numerical integration of the Einstein equations, and because this orbit may be deep in the large black hole's strong-field region, a post-Newtonian approximation would be inaccurate. Instead, we treat the EMRI spacetime as a perturbation of the large black hole's ``background'' (Schwarzschild or Kerr) spacetime and use the methods of black-hole perturbation theory, expanding in the small parameter m/M. The particle's motion can be described either as the result of a radiation-reaction ``self-force'' acting in the background spacetime or as geodesic motion in a perturbed spacetime. Several different lines of reasoning lead to the (same) basic O(m/M) ``MiSaTaQuWa'' equations of motion for the particle. In particular, the MiSaTaQuWa equations can be derived by modelling the particle as either a point particle or a small Schwarzschild black hole. The latter is conceptually elegant, but the former is technically much simpler and (surprisingly for a nonlinear field theory such as general relativity) still yields correct results. Modelling the small body as a point particle, its own field is singular along the particle worldline, so it's difficult to formulate a meaningful ``perturbation'' theory or equations of motion there. Detweiler and Whiting found

  17. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    Science.gov (United States)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20-years and 1,500-years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why using innovative techniques customised for broad scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some

  18. Building a risk-targeted regional seismic hazard model for South-East Asia

    Science.gov (United States)

    Woessner, J.; Nyst, M.; Seyhan, E.

    2015-12-01

    The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motions model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and Indochine countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity from geologic and geodetic data on crustal faults and 2) along the interface of subduction zones and within the slabs and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. Sumatra fault zone, Philippine fault zone) as well as the subduction zones, showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g.,return period losses, average annual loss) and reviewing their relative impact on various lines of businesses.

  19. Seismic hazard assessment for Myanmar: Earthquake model database, ground-motion scenarios, and probabilistic assessments

    Science.gov (United States)

    Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.

    2015-12-01

    We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time of last events), we have determined the earthquake recurrence models of seismogenic sources. To evaluate the ground shaking behaviours in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equations (GMPEs) of Atkinson and Boore (2003) fit the behaviours of the subduction events best. Also, the 2011 Tarlay and 2012 Thabeikkyin events suggested the GMPEs of Akkar and Cagnan (2010) fit crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear velocity down to 30 m depth) from analysis of topographic slope and microtremor array measurements to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, as seismic sources there produce earthquakes at short intervals and/or their last events occurred a long time ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., the downtown of Sagaing and Shwemawdaw Pagoda in Bago.
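
    The recurrence models mentioned here rest on the Gutenberg-Richter relationship. As a rough illustration of how source parameters translate into event rates, the sketch below computes annual rates and mean recurrence intervals for a truncated Gutenberg-Richter source with hypothetical a-, b- and maximum-magnitude values (not the values estimated in the study).

```python
import numpy as np

# Hypothetical source parameters; the study estimates these per seismogenic
# source from the merged catalogue and fault data.
a, b = 4.5, 1.0             # log10 of the annual rate of M >= 0, and the slope
m_max = 8.0                 # assumed maximum magnitude for the source

def annual_rate(m_min, a=a, b=b, m_max=m_max):
    """Annual rate of events with magnitude in [m_min, m_max] (truncated G-R)."""
    return 10 ** (a - b * m_min) - 10 ** (a - b * m_max)

for m in (5.0, 6.0, 7.0):
    rate = annual_rate(m)
    print(f"M >= {m:.1f}: {rate:.3f}/yr, mean recurrence ~ {1.0 / rate:.0f} yr")
```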

  20. Using the RBFN model and GIS technique to assess wind erosion hazards of Inner Mongolia, China

    Science.gov (United States)

    Shi, Huading; Liu, Jiyuan; Zhuang, Dafang; Hu, Yunfeng

    2006-08-01

    Soil wind erosion is the primary process and the main driving force for land desertification and sand-dust storms in the arid and semi-arid areas of Northern China, and it has attracted considerable research attention. This paper selects the Inner Mongolia autonomous region as the research area, quantifies the various indicators affecting soil wind erosion, uses GIS technology to extract the spatial data, and constructs an RBFN (Radial Basis Function Network) model for assessing wind erosion hazard. After training on sample data for the different levels of wind erosion hazard, we obtain the parameters of the model and then assess the wind erosion hazard. The result shows that wind erosion hazard is very severe in the southern parts of Inner Mongolia, varies from moderate to severe in the counties of the middle regions, and is slight in the east. Comparison of the result with other research shows that it is in conformity with actual conditions, demonstrating the reasonability and applicability of the RBFN model.
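
    As a rough sketch of the kind of model described, the following Python code trains a small Gaussian radial basis function network by least squares on synthetic indicator data; the indicators, centre selection, and kernel width are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centres, width):
    """Gaussian radial-basis design matrix."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical training data: rows are grid cells, columns are wind-erosion
# indicators (e.g. wind speed, vegetation cover, soil texture); y is a hazard level.
X_train = rng.random((200, 4))
y_train = rng.integers(0, 4, size=200).astype(float)

centres = X_train[rng.choice(len(X_train), 20, replace=False)]  # simple centre selection
Phi = rbf_design(X_train, centres, width=0.3)
weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)          # output-layer weights

X_new = rng.random((5, 4))
hazard_pred = rbf_design(X_new, centres, width=0.3) @ weights
print(np.round(hazard_pred, 2))
```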

  1. Coordinate Descent Methods for the Penalized Semiparametric Additive Hazards Model

    Directory of Open Access Journals (Sweden)

    Anders Gorst-Rasmussen

    2012-04-01

    For survival data with a large number of explanatory variables, lasso penalized Cox regression is a popular regularization strategy. However, a penalized Cox model may not always provide the best fit to data and can be difficult to estimate in high dimension because of its intrinsic nonlinearity. The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires no nonlinear optimization steps and offers excellent performance and stability. An implementation is available in the R package ahaz. We demonstrate this implementation in a small timing study and in an application to real data.
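
    The key computational point of the abstract is that each coordinate update is available in closed form, so the whole fit is a sequence of soft-thresholding steps. The sketch below shows a generic cyclic coordinate descent lasso on a least-squares-type loss; it is not the ahaz implementation and does not handle the additive hazards estimating equations themselves, but it illustrates the update structure on synthetic data.

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def cyclic_cd_lasso(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for lasso on a least-squares-type loss.

    Each coordinate update is a closed-form soft-thresholding step, so no
    nonlinear optimisation is required (the feature the abstract highlights).
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)                    # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding j
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_ss[j]
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
print(np.round(cyclic_cd_lasso(X, y, lam=0.1)[:5], 2))
```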

  2. Prospects of ratio and differential (δ) ratio based measurement-models: a case study for IRMS evaluation

    CERN Document Server

    Datta, B P

    2015-01-01

    The suitability of a mathematical model Y = f({Xi}) in serving a given purpose (which is preset by the input-to-output variation rates specific to the function f) can be judged beforehand. We thus evaluate here two apparently similar models: Y_A = f_A(S_Ri, W_Ri) = S_Ri/W_Ri and Y_D = f_D(S_Ri, W_Ri) = (S_Ri/W_Ri - 1) = (Y_A - 1), with S_Ri and W_Ri representing certain measurable variables (e.g. the sample S and the working-lab-reference W specific i-th isotopic-abundance ratios, respectively, as in isotope ratio mass spectrometry (IRMS)). The idea is to ascertain whether f_D should represent a better model than f_A, specifically for the well-known IRMS evaluation. The study clarifies that f_A and f_D really represent different model families. For example, the possible variation, e_A, of an absolute estimate such as y_A (and/or the risk of running a machine on the basis of the measurement model f_A) should be dictated by the possible R_i-measurement variations (u_S and u_W) only: e_A = (u_S + u_W); i....
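
    A small numerical illustration (with made-up ratio values) of why the two models behave differently: the absolute ratio Y_A and the differential ratio Y_D share the same absolute uncertainty, but the differential form divides it by a much smaller nominal value, inflating the relative variation.

```python
# Minimal numerical illustration (hypothetical isotope-ratio values) of the two
# measurement models discussed in the abstract: Y_A = S/W and Y_D = Y_A - 1.
S, W = 0.0112372, 0.0111802      # sample and working-reference isotopic ratios (made up)
u_S, u_W = 0.0005, 0.0005        # assumed fractional (relative) uncertainties

Y_A = S / W
Y_D = Y_A - 1.0                  # the "differential" (delta-like) ratio

# For the absolute ratio, relative variations add: e_A ~ u_S + u_W.
e_A = u_S + u_W
# The differential ratio carries the same absolute change but a much smaller
# nominal value, so its relative variation is amplified by the factor Y_A / Y_D.
e_D = e_A * abs(Y_A / Y_D)

print(f"Y_A = {Y_A:.6f}, relative variation ~ {e_A:.4%}")
print(f"Y_D = {Y_D:.6f}, relative variation ~ {e_D:.2%}")
```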

  3. Lattice Boltzmann modeling of multiphase flows at large density ratio with an improved pseudopotential model

    CERN Document Server

    Li, Q; Li, X J

    2012-01-01

    Owing to its conceptual simplicity and computational efficiency, the pseudopotential multiphase lattice Boltzmann (LB) model has attracted significant attention since its emergence. In this work, we aim to extend the pseudopotential LB model to the simulations of multiphase flows at large density ratio and relatively high Reynolds number. First, based on our recent work [Li et al., Phys. Rev. E. 86, 016709 (2012)], an improved forcing scheme is proposed for the multiple-relaxation-time (MRT) pseudopotential LB model in order to achieve thermodynamic consistency and large density ratio in the model. Next, through investigating the effects of the parameter a in the Carnahan-Starling equation of state, we find that, as compared with a = 1, a = 0.25 is capable of greatly reducing the magnitude of the spurious currents at large density ratio. Furthermore, it is found that a lower liquid viscosity can be gained in the pseudopotential LB model by increasing the kinematic viscosity ratio between the vapor and liquid ...

  4.  The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study the...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients...

  5. Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards

    Energy Technology Data Exchange (ETDEWEB)

    Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J

    2007-11-26

    This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end user's computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.

  6. Patterns of Risk Using an Integrated Spatial Multi-Hazard Model (PRISM Model)

    Science.gov (United States)

    Multi-hazard risk assessment has long centered on small scale needs, whereby a single community or group of communities’ exposures are assessed to determine potential mitigation strategies. While this approach has advanced the understanding of hazard interactions, it is li...

  7. New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.

    Science.gov (United States)

    Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C

    2015-09-08

    Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making.

  8. Deriving global flood hazard maps of fluvial floods through a physical model cascade

    Science.gov (United States)

    Pappenberger, Florian; Dutra, Emanuel; Wetterhall, Fredrik; Cloke, Hannah L.

    2013-04-01

    Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979-2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25 × 25 km grid, which is then reprojected onto a 1 × 1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25 × 25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
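
    The return-period step described here (fitting a Gumbel distribution to simulated annual maxima and reading off design flows) can be reproduced in a few lines; the sketch below uses synthetic annual maxima in place of the routed-runoff output, so the numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical 30 years of annual maximum discharge (m^3/s) at one river grid cell;
# in the study these come from the 1979-2010 routed-runoff simulation.
annual_maxima = rng.gumbel(loc=1500.0, scale=400.0, size=30)

loc, scale = stats.gumbel_r.fit(annual_maxima)   # fit the Gumbel (EV1) distribution

return_periods = np.array([2, 5, 20, 100, 500])
non_exceedance = 1.0 - 1.0 / return_periods
design_flows = stats.gumbel_r.ppf(non_exceedance, loc=loc, scale=scale)

for T, q in zip(return_periods, design_flows):
    print(f"{T:>4}-yr flood peak ~ {q:8.1f} m^3/s")
```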

  9. Deriving global flood hazard maps of fluvial floods through a physical model cascade

    Directory of Open Access Journals (Sweden)

    F. Pappenberger

    2012-11-01

    Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25 × 25 km grid, which is then reprojected onto a 1 × 1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25 × 25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.

  10. A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Omid Boyer

    2013-01-01

    Technological progress has caused industrial hazardous wastes to increase throughout the world. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can result from the location of undesirable facilities and also from routing hazardous waste. In this paper a biobjective mixed integer programming model for industrial hazardous waste location-routing is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk. The risk of population exposure within a bandwidth along the route is used to measure transportation risk. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem demonstrate the conflict between the two objectives. Hence, it is possible to decrease the cost value by marginally increasing the transportation risk value and vice versa. A weighted sum method is utilized to combine the two objective functions into one objective function. To solve the problem, GAMS software with the CPLEX solver is used. The problem is applied in Markazi province in Iran.
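
    The weighted-sum scalarisation used to combine the two objectives can be illustrated with a toy example: the actual model is a mixed-integer program solved with GAMS/CPLEX, whereas the sketch below simply enumerates a handful of hypothetical routing plans to show how shifting the weight trades cost against risk.

```python
# Toy illustration of a weighted-sum scalarisation over candidate routing plans
# (all names and numbers are hypothetical, not the paper's case study data).
candidate_plans = {
    "plan_A": {"cost": 120.0, "risk": 9.5},
    "plan_B": {"cost": 150.0, "risk": 6.0},
    "plan_C": {"cost": 180.0, "risk": 4.5},
}

def weighted_sum(plan, w_cost, cost_ref=180.0, risk_ref=9.5):
    # Normalise each objective by a reference value before combining,
    # so the weight expresses the decision maker's trade-off directly.
    return (w_cost * plan["cost"] / cost_ref
            + (1.0 - w_cost) * plan["risk"] / risk_ref)

for w in (0.2, 0.5, 0.8):
    best = min(candidate_plans, key=lambda k: weighted_sum(candidate_plans[k], w))
    print(f"cost weight {w:.1f} -> choose {best}")
```

    Sweeping the weight from risk-dominated to cost-dominated selects the expensive low-risk plan, then the compromise, then the cheap high-risk plan, which is exactly the objective conflict the abstract reports.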

  11. Geoinformational prognostic model of mudflows hazard and mudflows risk for the territory of Ukrainian Carpathians

    Science.gov (United States)

    Chepurna, Tetiana B.; Kuzmenko, Eduard D.; Chepurnyj, Igor V.

    2017-06-01

    The article is devoted to the geological issue of space-time regional prognostication of mudflow hazard. A methodology for space-time prediction of mudflow hazard by creating a GIS predictive model has been developed. Using GIS technologies, a relevant and representative complex of significant spatial and temporal factors, adjusted for use in regional prediction of mudflow hazard, was selected. Geological, geomorphological, technological, climatic, and landscape factors were selected as spatial mudflow factors. The spatial analysis is based on detecting a regular connection between spatial factor characteristics and the spatial distribution of mudflow sites. The function of a standard complex spatial index (SCSI) of the probability of mudflow site distribution has been calculated. The temporal, long-term prediction of mudflow activity was based on the hypothesis of the regular reiteration of natural processes. Heliophysical, seismic, meteorological, and hydrogeological factors were selected as temporal mudflow factors. The function of a complex index of long-standing mudflow activity (CIMA) has been calculated. A prognostic geoinformational model of mudflow hazard up to the year 2020, the year of the next predicted peak of mudflow activity, has been created. Mudflow risks have been calculated, and a cartogram of mudflow risk assessment within the limits of administrative-territorial units has been built for 2020.

  12. Critical load analysis in hazard assessment of metals using a Unit World Model.

    Science.gov (United States)

    Gandhi, Nilima; Bhavsar, Satyendra P; Diamond, Miriam L

    2011-09-01

    A Unit World approach has been used extensively to rank chemicals for their hazards and to understand differences in chemical behavior. Whereas the fate and effects of an organic chemical in a Unit World Model (UWM) analysis vary systematically according to one variable (fraction of organic carbon), and the chemicals have a singular ranking regardless of environmental characteristics, metals can change their hazard ranking according to freshwater chemistry, notably pH and dissolved organic carbon (DOC). Consequently, developing a UWM approach for metals requires selecting a series of representative freshwater chemistries, based on an understanding of the sensitivity of model results to this chemistry. Here we analyze results from a UWM for metals with the goal of informing the selection of appropriate freshwater chemistries for a UWM. The UWM loosely couples the biotic ligand model (BLM) to a geochemical speciation model (Windermere Humic Adsorption Model [WHAM]) and then to the multi-species fate transport-speciation (Transpec) model. The UWM is applied to estimate the critical load (CL) of cationic metals Cd, Cu, Ni, Pb, and Zn, using three lake chemistries that vary in trophic status, pH, and other parameters. The model results indicated a difference of four orders of magnitude in particle-to-total dissolved partitioning (K(d)) that translated into minimal differences in fate because of the short water residence time used. However, a maximum 300-fold difference was calculated in Cu toxicity among the three chemistries and three aquatic organisms. Critical loads were lowest (greatest hazard) in the oligotrophic water chemistry and highest (least hazard) in the eutrophic water chemistry, despite the highest fraction of free metal ion as a function of total metal occurring in the mesotrophic system, where toxicity was ameliorated by competing cations. Water hardness, DOC, and pH had the greatest influence on CL, because of the influence of these factors on aquatic

  13. Integrating fault and seismological data into a probabilistic seismic hazard model for Italy.

    Science.gov (United States)

    Valentini, Alessandro; Visini, Francesco; Pace, Bruno

    2017-04-01

    We present the results of a new probabilistic seismic hazard analysis (PSHA) for Italy based on active fault and seismological data. Combining seismic hazard from active faults with distributed seismic sources (where there are no data on active faults) is the backbone of this work. Far from identifying a best procedure, currently adopted approaches combine active faults and background sources by applying a threshold magnitude, generally between 5.5 and 7, over which seismicity is modelled by faults and under which it is modelled by distributed or area sources. In our PSHA we (i) apply a new method for the treatment of geologic data on major active faults and (ii) propose a new approach to combine these data with historical seismicity to evaluate PSHA for Italy. Assuming that deformation is concentrated in correspondence of faults, we combine the earthquake occurrences derived from the geometry and slip rates of the active faults with the earthquakes from the spatially smoothed earthquake sources. In the vicinity of an active fault, the smoothed seismic activity is gradually reduced by a fault-size-driven factor. Even if the range and gross spatial distribution of expected accelerations obtained in our work are comparable to those obtained through methods applying seismic catalogues and classical zonation models, the main difference is in the detailed spatial pattern of our PSHA model: our model is characterized by spots of more hazardous areas in correspondence of mapped active faults, while the previous models give expected accelerations almost uniformly distributed over large regions. Finally, we investigate the impact of the earthquake rates derived from two magnitude-frequency distribution (MFD) models for faults on the hazard results, and with respect to the contribution of faults versus distributed seismic activity.

  14. Mare basalt genesis - Modeling trace elements and isotopic ratios

    Science.gov (United States)

    Binder, A. B.

    1985-11-01

    Various types of mare basalt data have been synthesized, leading to the production of an internally consistent model of the mare basalt source region and mare basalt genesis. The model accounts for the mineralogical, major oxide, compatible siderophile trace element, incompatible trace element, and isotopic characteristics of most of the mare basalt units and of all the pyroclastic glass units for which reliable data are available. Initial tests of the model show that it also reproduces the mineralogy and incompatible trace element characteristics of the complementary highland anorthosite suite of rocks and, in a general way, those of the lunar granite suite of rocks.

  15. Review and perspectives: Understanding natural-hazards-generated ionospheric perturbations using GPS measurements and coupled modeling

    Science.gov (United States)

    Komjathy, Attila; Yang, Yu-Ming; Meng, Xing; Verkhoglyadova, Olga; Mannucci, Anthony J.; Langley, Richard B.

    2016-07-01

    Natural hazards including earthquakes, volcanic eruptions, and tsunamis have been significant threats to humans throughout recorded history. Global navigation satellite systems (GNSS; including the Global Positioning System (GPS)) receivers have become primary sensors to measure signatures associated with natural hazards. These signatures typically include GPS-derived seismic deformation measurements, coseismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure, model, and monitor postseismic ionospheric disturbances caused by, e.g., earthquakes, volcanic eruptions, and tsunamis. In this paper, we review research progress at the Jet Propulsion Laboratory and elsewhere using examples of ground-based and spaceborne observation of natural hazards that generated TEC perturbations. We present results for state-of-the-art imaging using ground-based and spaceborne ionospheric measurements and coupled atmosphere-ionosphere modeling of ionospheric TEC perturbations. We also report advancements and chart future directions in modeling and inversion techniques to estimate tsunami wave heights and ground surface displacements using TEC measurements and error estimates. Our initial retrievals strongly suggest that both ground-based and spaceborne GPS remote sensing techniques could play a critical role in detection and imaging of the upper atmosphere signatures of natural hazards including earthquakes and tsunamis. We found that combining ground-based and spaceborne measurements may be crucial in estimating critical geophysical parameters such as tsunami wave heights and ground surface displacements using TEC observations. The GNSS-based remote sensing of natural-hazard-induced ionospheric disturbances could be applied to and used in operational tsunami and earthquake early warning systems.

  16. Techniques, advances, problems and issues in numerical modelling of landslide hazard

    CERN Document Server

    Van Asch, Theo; Van Beek, Ludovicus; Amitrano, David

    2007-01-01

    Slope movements (e.g. landslides) are dynamic systems that are complex in time and space and closely linked to both inherited and current preparatory and triggering controls. It is not yet possible to assess in all cases conditions for failure, reactivation and rapid surges and successfully simulate their transient and multi-dimensional behaviour and development, although considerable progress has been made in isolating many of the key variables and elementary mechanisms and to include them in physically-based models for landslide hazard assessments. Therefore, the objective of this paper is to review the state-of-the-art in the understanding of landslide processes and to identify some pressing challenges for the development of our modelling capabilities in the forthcoming years for hazard assessment. This paper focuses on the special nature of slope movements and the difficulties related to simulating their complex time-dependent behaviour in mathematical, physically-based models. It analyses successively th...

  17. A meta-analysis of adjusted hazard ratios from 20 observational studies of bilateral versus single internal thoracic artery coronary artery bypass grafting.

    Science.gov (United States)

    Takagi, Hisato; Goto, Shin-nosuke; Watanabe, Taku; Mizuno, Yusuke; Kawai, Norikazu; Umemoto, Takuya

    2014-10-01

    In 2001, a landmark meta-analysis of bilateral internal thoracic artery (BITA) versus single internal thoracic artery (SITA) coronary artery bypass grafting for long-term survival included 7 observational studies (only 3 of which reported adjusted hazard ratios [HRs]) enrolling approximately 16,000 patients. Updating the previous meta-analysis to determine whether BITA grafting reduces long-term mortality relative to SITA grafting, we exclusively abstracted, and then combined in a meta-analysis, adjusted (not unadjusted) HRs from observational studies. MEDLINE and EMBASE were searched until September 2013. Eligible studies were observational studies of BITA versus SITA grafting that reported an adjusted HR for long-term (≥4 years) mortality as an outcome. Meta-regression analyses were performed to determine whether the effects of BITA grafting were modulated by the prespecified factors. Twenty observational studies enrolling 70,897 patients were identified and included. A pooled analysis suggested a significant reduction in long-term mortality with BITA relative to SITA grafting (HR, 0.80; 95% confidence interval, 0.77 to 0.84). When data from 6 pedicled and 6 skeletonized internal thoracic artery studies were separately pooled, BITA grafting was associated with a statistically significant 26% and 16% reduction in mortality relative to SITA grafting, respectively (P for subgroup differences=.04). The meta-regression coefficient was significantly negative for the proportion of men (-0.00960; -0.01806 to -0.00114). Based on an updated meta-analysis of exclusively adjusted HRs from 20 observational studies enrolling more than 70,000 patients, BITA grafting seems to significantly reduce long-term mortality. As the proportion of men increases, BITA grafting is more beneficial in reducing mortality.
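
    The pooling step in such a meta-analysis is usually an inverse-variance combination of log hazard ratios. The sketch below shows a fixed-effect version on hypothetical study-level HRs and confidence intervals; it is a generic illustration, not a reproduction of the published analysis (which may have used random-effects weighting).

```python
import numpy as np

# Hypothetical adjusted hazard ratios (HR, lower CI, upper CI) from individual
# studies; the actual meta-analysis pooled 20 observational studies.
studies = [
    (0.78, 0.65, 0.94),
    (0.85, 0.72, 1.00),
    (0.74, 0.60, 0.91),
    (0.88, 0.70, 1.10),
]

log_hr = np.log([s[0] for s in studies])
se = (np.log([s[2] for s in studies]) - np.log([s[1] for s in studies])) / (2 * 1.96)

w = 1.0 / se**2                               # inverse-variance weights
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

hr = np.exp(pooled)
ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled HR = {hr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```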

  18. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, failure of model adequacy could lead to severe bias in parameter...... estimation, and only a limited contribution has been made to check the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray’s model. The proposed goodness-of-fit test procedures are based on the cumulative sums...

  19. Measurement, geospatial, and mechanistic models of public health hazard vulnerability and jurisdictional risk.

    Science.gov (United States)

    Testa, Marcia A; Pettigrew, Mary L; Savoia, Elena

    2014-01-01

    County and state health departments are increasingly conducting hazard vulnerability and jurisdictional risk (HVJR) assessments for public health emergency preparedness and mitigation planning and evaluation to improve the public health disaster response; however, integration and adoption of these assessments into practice are still relatively rare. While the quantitative methods associated with complex analytic and measurement methods, causal inference, and decision theory are common in public health research, they have not been widely used in public health preparedness and mitigation planning. To address this gap, the Harvard School of Public Health PERLC's goal was to develop measurement, geospatial, and mechanistic models to aid public health practitioners in understanding the complexity of HVJR assessment and to determine the feasibility of using these methods for dynamic and predictive HVJR analyses. We used systematic reviews, causal inference theory, structural equation modeling (SEM), and multivariate statistical methods to develop the conceptual and mechanistic HVJR models. Geospatial mapping was used to inform the hypothetical mechanistic model by visually examining the variability and patterns associated with county-level demographic, social, economic, hazards, and resource data. A simulation algorithm was developed for testing the feasibility of using SEM estimation. The conceptual model identified the predictive latent variables used in public health HVJR tools (hazard, vulnerability, and resilience), the outcomes (human, physical, and economic losses), and the corresponding measurement subcomponents. This model was translated into a hypothetical mechanistic model to explore and evaluate causal and measurement pathways. To test the feasibility of SEM estimation, the mechanistic model path diagram was translated into linear equations and solved simultaneously using simulated data representing 192 counties. Measurement, geospatial, and mechanistic

  20. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    Science.gov (United States)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
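
    For reference, the temporal ETAS conditional intensity used for this kind of rate modelling is the background rate plus an Omori-type contribution from each past event. The sketch below evaluates it for a hypothetical catalogue and parameter set (illustrative values only, not the optimised parameters described in the abstract).

```python
import numpy as np

def etas_intensity(t, history, mu, K, alpha, c, p, m0):
    """Conditional intensity of a temporal ETAS model at time t.

    history : iterable of (t_i, m_i) with event times and magnitudes
    mu      : background rate; K, alpha, c, p : triggering parameters
    m0      : reference (minimum) magnitude
    """
    lam = mu
    for t_i, m_i in history:
        if t_i < t:
            lam += K * np.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return lam

# Hypothetical catalogue: (time in days, magnitude)
catalogue = [(0.0, 4.1), (2.3, 3.2), (2.4, 3.0), (10.7, 4.6)]
print(etas_intensity(12.0, catalogue, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=3.0))
```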

  1. Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models

    Science.gov (United States)

    Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges

    2016-04-01

    The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, mapping of seismogenically active faults is still an on-going program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015) in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (as distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and those more classically derived from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model

  2. Dynamic modelling of a forward osmosis-nanofiltration integrated process for treating hazardous wastewater.

    Science.gov (United States)

    Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik

    2016-11-01

    Dynamic modelling and simulation of an integrated nanofiltration-forward osmosis system was done along with economic evaluation to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. The system, operated in a closed loop, not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by turning wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is reflected in a high overall correlation coefficient (R² > 0.98) and low relative error, with the forward osmosis loop operating at a reasonably high flux of 56-58 L per square metre per hour.

  3. Rockfall Hazard Analysis From Discrete Fracture Network Modelling with Finite Persistence Discontinuities

    Science.gov (United States)

    Lambert, Cédric; Thoeni, Klaus; Giacomini, Anna; Casagrande, Davide; Sloan, Scott

    2012-09-01

    Developing an accurate representation of the rock mass fabric is a key element in rock fall hazard analysis. The orientation, persistence and density of fractures control the volume and shape of unstable blocks or compartments. In this study, the discrete fracture modelling technique and digital photogrammetry were used to accurately depict the fabric. A volume distribution of unstable blocks was derived combining polyhedral modelling and kinematic analyses. For each block size, probabilities of failure and probabilities of propagation were calculated. A complete energy distribution was obtained by considering, for each block size, its occurrence in the rock mass, its probability of falling, its probability to reach a given location, and the resulting distribution of energies at each location. This distribution was then used with an energy-frequency diagram to assess the hazard.

  4. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude......We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard...... that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...

  5. Model and Method for Multiobjective Time-Dependent Hazardous Material Transportation

    Directory of Open Access Journals (Sweden)

    Zhen Zhou

    2014-01-01

    In most hazardous material transportation problems, risk factors are assumed to be constant, which ignores the fact that they can vary with time throughout the day. In this paper, we deal with a novel time-dependent hazardous material transportation problem via lane reservation, in which the dynamic nature of transportation risk in the real-life traffic environment is taken into account. We first develop a multiobjective mixed integer programming (MIP) model with two conflicting objectives: minimizing the impact on the normal traffic resulting from lane reservation and minimizing the total transportation risk. We then present a cut-and-solve based ε-constraint method to solve this model. Computational results indicate that our method outperforms the ε-constraint method based on the optimization software package CPLEX.

  6. The Impact of the Subduction Modeling Beneath Calabria on Seismic Hazard

    Science.gov (United States)

    Morasca, P.; Johnson, W. J.; Del Giudice, T.; Poggi, P.; Traverso, C.; Parker, E. J.

    2014-12-01

    The aim of this work is to better understand the influence of subduction beneath Calabria on seismic hazard, as very little is known about present-day kinematics and the seismogenic potential of the slab interface in the Calabrian Arc region. This evaluation is significant because, depending on stress conditions, subduction zones can vary from being fully coupled to almost entirely decoupled with important consequences in the seismic hazard assessment. Although the debate is still open about the current kinematics of the plates and microplates lying in the region and the degree of coupling of Ionian lithosphere beneath Calabria, GPS data suggest that this subduction is locked in its interface sector. Also the lack of instrumentally recorded thrust earthquakes suggests this zone is locked. The current seismotectonic model developed for the Italian National territory is simplified in this area and does not reflect the possibility of locked subduction beneath the Calabria that could produce infrequent, but very large earthquakes associated with the subduction interface. Because of this we have conducted an independent seismic source analysis to take into account the influence of subduction as part of a regional seismic hazard analysis. Our final model includes two separate provinces for the subduction beneath the Calabria: inslab and interface. From a geometrical point of view the interface province is modeled with a depth between 20-50 km and a dip of 20°, while the inslab one dips 70° between 50 -100 km. Following recent interpretations we take into account that the interface subduction is possibly locked and, in such a case, large events could occur as characteristic earthquakes. The results of the PSHA analysis show that the subduction beneath the Calabrian region has an influence in the total hazard for this region, especially for long return periods. Regional seismotectonic models for this region should account for subduction.

  7. Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection

    Science.gov (United States)

    Messing, B. S.

    1980-01-01

    A terrain modeling algorithm that would reconstruct the sensed ground images formed by the triangulation scheme, and classify as unsafe any terrain feature that would pose a hazard to a roving vehicle is described. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.

  8. Assessing Glacial Lake Outburst Flood Hazard in the Nepal Himalayas using Satellite Imagery and Hydraulic Models

    Science.gov (United States)

    Rounce, D.; McKinney, D. C.

    2015-12-01

    The last half century has witnessed considerable glacier melt that has led to the formation of large glacial lakes. These glacial lakes typically form behind terminal moraines comprising loose boulders, debris, and soil, which are susceptible to fail and cause a glacial lake outburst flood (GLOF). These lakes also act as a heat sink that accelerates glacier melt and in many cases is accompanied by rapid areal expansion. As these glacial lakes continue to grow, their hazard also increases due to the increase in potential flood volume and the lakes' proximity to triggering events such as avalanches and landslides. Despite the large threat these lakes may pose to downstream communities, there are few detailed studies that combine satellite imagery with hydraulic models to present a holistic understanding of the GLOF hazard. The aim of this work is to assess the GLOF hazard of glacial lakes in Nepal using a holistic approach based on a combination of satellite imagery and hydraulic models. Imja Lake will be the primary focus of the modeling efforts, but the methods will be developed in a manner that is transferable to other potentially dangerous glacial lakes in Nepal.

  9. Benchmarking Computational Fluid Dynamics Models for Application to Lava Flow Simulations and Hazard Assessment

    Science.gov (United States)

    Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.

    2015-12-01

    Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weakness of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominately mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.

  10. Fault Tolerance Automotive Air-Ratio Control Using Extreme Learning Machine Model Predictive Controller

    OpenAIRE

    Pak Kin Wong; Hang Cheong Wong; Chi Man Vong; Tong Meng Iong; Ka In Wong; Xianghui Gao

    2015-01-01

    Effective air-ratio control is desirable to maintain the best engine performance. However, traditional air-ratio control assumes the lambda sensor located at the tail pipe works properly and relies strongly on the air-ratio feedback signal measured by the lambda sensor. When the sensor is warming up during cold start or under failure, the traditional air-ratio control no longer works. To address this issue, this paper utilizes an advanced modelling technique, kernel extreme learning machine (...

  11. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    Science.gov (United States)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

    Hazardous materials transportation is an important and hot issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.

  12. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides. These were classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In the last decades, landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard analysis are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return periods, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return periods. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return periods of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return period rainfall event, corresponding to an estimated cumulative daily intensity of 280–330 mm. This value can be considered the hydrological triggering
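
    The slope stability step named in the abstract is the classical infinite slope model, in which pore pressure generated by rainfall reduces the effective normal stress on the slip surface. The sketch below computes the factor of safety for hypothetical soil parameters at increasing saturation; the values are illustrative, not those calibrated for the Vezza basin.

```python
import numpy as np

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """Factor of safety of an infinite slope with a water table at relative depth m.

    c_eff : effective cohesion (kPa)      phi_deg  : effective friction angle (deg)
    gamma : soil unit weight (kN/m^3)     z        : soil thickness (m)
    beta_deg : slope angle (deg)          m        : saturated fraction of z (0..1)
    """
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    u = m * gamma_w * z * np.cos(beta) ** 2           # pore water pressure on slip plane
    tau = gamma * z * np.sin(beta) * np.cos(beta)     # driving shear stress
    sigma_n = gamma * z * np.cos(beta) ** 2           # total normal stress
    return (c_eff + (sigma_n - u) * np.tan(phi)) / tau

# Hypothetical values: increasing the saturation fraction m (a wetter event,
# i.e. a longer return period rainfall) lowers the factor of safety.
for m in (0.0, 0.5, 1.0):
    print(f"m = {m:.1f}  FS = {infinite_slope_fs(5.0, 30.0, 18.0, 2.0, 35.0, m):.2f}")
```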

  13. Global Hydrological Hazard Evaluation System (Global BTOP) Using Distributed Hydrological Model

    Science.gov (United States)

    Gusyev, M.; Magome, J.; Hasegawa, A.; Takeuchi, K.

    2015-12-01

    A global hydrological hazard evaluation system based on the BTOP models (Global BTOP) is introduced and quantifies flood and drought hazards with simulated river discharges globally for historical, near real-time monitoring and climate change impact studies. The BTOP model utilizes a modified topographic index concept and simulates rainfall-runoff processes including snowmelt, overland flow, soil moisture in the root and unsaturated zones, sub-surface flow, and river flow routing. The current global BTOP is constructed from global data on 10-min grid and is available to conduct river basin analysis on local, regional, and global scale. To reduce the impact of a coarse resolution, topographical features of global BTOP were obtained using river network upscaling algorithm that preserves fine resolution characteristics of 3-arcsec HydroSHEDS and 30-arcsec Hydro1K datasets. In addition, GLCC-IGBP land cover (USGS) and the DSMW(FAO) were used for the root zone depth and soil properties, respectively. The long-term seasonal potential evapotranspiration within BTOP model was estimated by the Shuttleworth-Wallace model using climate forcing data CRU TS3.1 and a GIMMS-NDVI(UMD/GLCF). The global BTOP was run with globally available precipitation such APHRODITE dataset and showed a good statistical performance compared to the global and local river discharge data in the major river basins. From these simulated daily river discharges at each grid, the flood peak discharges of selected return periods were obtained using the Gumbel distribution with L-moments and the hydrological drought hazard was quantified using standardized runoff index (SRI). For the dynamic (near real-time) applications, the global BTOP model is run with GSMaP-NRT global precipitation and simulated daily river discharges are utilized in a prototype near-real time discharge simulation system (GFAS-Streamflow), which is used to issue flood peak discharge alerts globally. The global BTOP system and GFAS
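
    The drought component mentioned here, the standardized runoff index (SRI), maps aggregated runoff onto a standard normal scale via a fitted distribution. The sketch below is a simplified single-series version (a gamma fit over the whole record rather than per calendar month, and synthetic runoff data), intended only to illustrate the transformation.

```python
import numpy as np
from scipy import stats

def standardized_runoff_index(runoff, window=3):
    """Standardized Runoff Index (SRI) sketch for one grid cell's monthly series.

    Aggregate runoff over `window` months, fit a gamma distribution, and map the
    cumulative probabilities to standard normal quantiles.
    """
    r = np.convolve(runoff, np.ones(window) / window, mode="valid")
    shape, loc, scale = stats.gamma.fit(r, floc=0.0)        # fit with location fixed at 0
    prob = stats.gamma.cdf(r, shape, loc=loc, scale=scale)
    prob = np.clip(prob, 1e-6, 1 - 1e-6)                     # avoid infinities at the tails
    return stats.norm.ppf(prob)

rng = np.random.default_rng(3)
monthly_runoff = rng.gamma(shape=2.0, scale=20.0, size=120)  # hypothetical 10-year series
sri = standardized_runoff_index(monthly_runoff)
print(f"driest 3-month period: SRI = {sri.min():.2f}")
```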

  14. Perspectives on open access high resolution digital elevation models to produce global flood hazard layers

    Science.gov (United States)

    Sampson, Christopher; Smith, Andrew; Bates, Paul; Neal, Jeffrey; Trigg, Mark

    2015-12-01

    Global flood hazard models have recently become a reality thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now limit significantly our ability to estimate flood inundation and risk for the majority of the planet's surface. The difficulty of deriving an accurate 'bare-earth' terrain model due to the interaction of vegetation and urban structures with the satellite-based remote sensors means that global terrain data are often poorest in the areas where people, property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models are over a decade old and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high resolution and high vertical precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.

  15. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i. e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and

  16. Assessing rainfall triggered landslide hazards through physically based models under uncertainty

    Science.gov (United States)

    Balin, D.; Metzger, R.; Fallot, J. M.; Reynard, E.

    2009-04-01

    Hazard and risk assessment require, besides good data, good simulation capabilities to allow prediction of events and their consequences. The present study introduces a landslide hazards assessment strategy based on the coupling of hydrological physically based models with slope stability models that should be able to cope with uncertainty of input data and model parameters. The hydrological model used is based on the Water balance Simulation Model, WASIM-ETH (Schulla et al., 1997), a fully distributed hydrological model that has been successfully used previously in the alpine regions to simulate runoff, snowmelt, glacier melt, and soil erosion and impact of climate change on these. The study region is the Vallon de Nant catchment (10km2) in the Swiss Alps. A sound sensitivity analysis will be conducted in order to choose the discretization threshold derived from a Laser DEM model, to which the hydrological model yields the best compromise between performance and time computation. The hydrological model will be further coupled with slope stability methods (that use the topographic index and the soil moisture such as derived from the hydrological model) to simulate the spatial distribution of the initiation areas of different geomorphic processes such as debris flows and rainfall triggered landslides. To calibrate the WASIM-ETH model, the Monte Carlo Markov Chain Bayesian approach is privileged (Balin, 2004, Schaefli et al., 2006). The model is used in a single and a multi-objective frame to simulate discharge and soil moisture with uncertainty at representative locations. This information is further used to assess the potential initial areas for rainfall triggered landslides and to study the impact of uncertain input data, model parameters and simulated responses (discharge and soil moisture) on the modelling of geomorphological processes.

  17. A median voter model of health insurance with ex post moral hazard.

    Science.gov (United States)

    Jacob, Johanna; Lundin, Douglas

    2005-03-01

    One of the main features of health insurance is moral hazard, as defined by Pauly (Pauly, M.V., 1968. The economics of moral hazard: comment. American Economic Review 58, 531-537): people face incentives for excess utilization of medical care since they do not pay the full marginal cost for provision. To mitigate the moral hazard problem, a coinsurance can be included in the insurance contract. But health insurance is often publicly provided. Having a uniform coinsurance rate determined in a political process is quite different from having different rates varying in accordance with one's preferences, as is possible with private insurance. We construct a political economy model in order to characterize the political equilibrium and answer questions like: "Under what conditions is there a conflict in society on what coinsurance rate should be set?" and "Which groups of individuals will vote for a higher and lower than equilibrium coinsurance rate, respectively?". We also extend our basic model and allow people to supplement the coverage provided by the government with private insurance. Then, we answer two questions: "Who will buy the additional coverage?" and "How do the coinsurance rates people are now faced with compare with the rates chosen with pure private provision?".

  18. Near-Field Probabilistic Seismic Hazard Analysis of Metropolitan Tehran Using Region-Specific Directivity Models

    Science.gov (United States)

    Yazdani, Azad; Nicknam, Ahmad; Dadras, Ehsan Yousefi; Eftekhari, Seyed Nasrollah

    2017-01-01

    Ground motions are affected by directivity effects at near-fault regions which result in low-frequency cycle pulses at the beginning of the velocity time history. The directivity features of near-fault ground motions can lead to significant increase in the risk of earthquake-induced damage on engineering structures. The ordinary probabilistic seismic hazard analysis (PSHA) does not take into account such effects; recent studies have thus proposed new frameworks to incorporate directivity effects in PSHA. The objective of this study is to develop the seismic hazard mapping of Tehran City according to near-fault PSHA procedure for different return periods. To this end, the directivity models required in the modified PSHA were developed based on a database of the simulated ground motions. The simulated database was used in this study because there are no recorded near-fault data in the region to derive purely empirically based pulse prediction models. The results show that the directivity effects can significantly affect the estimate of regional seismic hazard.

  19. Beyond Flood Hazard Maps: Detailed Flood Characterization with Remote Sensing, GIS and 2d Modelling

    Science.gov (United States)

    Santillan, J. R.; Marqueso, J. T.; Makinano-Santillan, M.; Serviano, J. L.

    2016-09-01

    Flooding is considered to be one of the most destructive natural disasters, so understanding floods and assessing the risks associated with them are becoming more important nowadays. In the Philippines, Remote Sensing (RS) and Geographic Information System (GIS) are the two main technologies used in the nationwide modelling and mapping of flood hazards. Although the currently available high resolution flood hazard maps have become very valuable, their use for flood preparedness and mitigation can be maximized by enhancing the layers of information these maps portray. In this paper, we present an approach based on RS, GIS and two-dimensional (2D) flood modelling to generate new flood layers (in addition to the usual flood depths and hazard layers) that are also very useful in flood disaster management, such as flood arrival times, flood velocities, flood durations, flood recession times, and the percentage of a given flood event period during which a particular location is inundated. The availability of these new layers of flood information is crucial for better decision making before, during, and after the occurrence of a flood disaster. The generation of these new flood characteristic layers is illustrated using the Cabadbaran River Basin in Mindanao, Philippines as a case study area. It is envisioned that these detailed maps can be considered as additional inputs in flood disaster risk reduction and management in the Philippines.
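
    As a rough illustration of how layers such as arrival time, inundation duration and the wet-time percentage can be post-processed from a stack of simulated depth grids, the NumPy sketch below may help. It is not the authors' workflow; the wet-depth threshold, the time step and the synthetic array are assumptions made for the example.

```python
import numpy as np

def flood_characteristics(depth_stack, dt_hours, wet_threshold=0.1):
    """Derive per-cell flood layers from a (time, rows, cols) stack of depths (m).

    Returns arrival time (h), inundation duration (h) and the fraction of the
    event during which each cell is wet; cells that stay dry get NaN arrival."""
    wet = depth_stack > wet_threshold            # boolean (time, rows, cols)
    ever_wet = wet.any(axis=0)
    first_wet_idx = wet.argmax(axis=0)           # index of the first wet time step
    arrival_h = np.where(ever_wet, first_wet_idx * dt_hours, np.nan)
    duration_h = wet.sum(axis=0) * dt_hours
    wet_fraction = wet.mean(axis=0)
    return arrival_h, duration_h, wet_fraction

# Synthetic example: 48 hourly depth grids on a 3x3 window of cells.
rng = np.random.default_rng(0)
depths = np.clip(rng.normal(0.2, 0.3, size=(48, 3, 3)), 0.0, None)
arrival, duration, fraction = flood_characteristics(depths, dt_hours=1.0)
print("arrival (h):\n", arrival)
print("duration (h):\n", duration)
print("wet fraction:\n", np.round(fraction, 2))
```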

  20. BEYOND FLOOD HAZARD MAPS: DETAILED FLOOD CHARACTERIZATION WITH REMOTE SENSING, GIS AND 2D MODELLING

    Directory of Open Access Journals (Sweden)

    J. R. Santillan

    2016-09-01

    Full Text Available Flooding is considered to be one of the most destructive natural disasters, so understanding floods and assessing the risks associated with them are becoming more important nowadays. In the Philippines, Remote Sensing (RS) and Geographic Information System (GIS) are the two main technologies used in the nationwide modelling and mapping of flood hazards. Although the currently available high resolution flood hazard maps have become very valuable, their use for flood preparedness and mitigation can be maximized by enhancing the layers of information these maps portray. In this paper, we present an approach based on RS, GIS and two-dimensional (2D) flood modelling to generate new flood layers (in addition to the usual flood depths and hazard layers) that are also very useful in flood disaster management, such as flood arrival times, flood velocities, flood durations, flood recession times, and the percentage of a given flood event period during which a particular location is inundated. The availability of these new layers of flood information is crucial for better decision making before, during, and after the occurrence of a flood disaster. The generation of these new flood characteristic layers is illustrated using the Cabadbaran River Basin in Mindanao, Philippines as a case study area. It is envisioned that these detailed maps can be considered as additional inputs in flood disaster risk reduction and management in the Philippines.

  1. Detailed Flood Modeling and Hazard Assessment from Storm Tides, Rainfall and Sea Level Rise

    Science.gov (United States)

    Orton, P. M.; Hall, T. M.; Georgas, N.; Conticello, F.; Cioffi, F.; Lall, U.; Vinogradov, S. V.; Blumberg, A. F.

    2014-12-01

    A flood hazard assessment has been conducted for the Hudson River from New York City to Troy at the head of tide, using a three-dimensional hydrodynamic model and merging hydrologic inputs and storm tides from tropical and extra-tropical cyclones, as well as spring freshet floods. Our recent work showed that neglecting freshwater flows leads to underestimation of peak water levels at up-river sites and neglecting stratification (typical with two-dimensional modeling) leads to underestimation all along the Hudson. The hazard assessment framework utilizes a representative climatology of over 1000 synthetic tropical cyclones (TCs) derived from a statistical-stochastic TC model, and historical extra-tropical cyclones and freshets from 1950-present. Hydrodynamic modeling is applied with seasonal variations in mean sea level and ocean and estuary stratification. The model is the Stevens ECOM model and is separately used for operational ocean forecasts on the NYHOPS domain (http://stevens.edu/NYHOPS). For the synthetic TCs, an Artificial Neural Network/ Bayesian multivariate approach is used for rainfall-driven freshwater inputs to the Hudson, translating the TC attributes (e.g. track, SST, wind speed) directly into tributary stream flows (see separate presentation by Cioffi for details). Rainfall intensity has been rising in recent decades in this region, and here we will also examine the sensitivity of Hudson flooding to future climate warming-driven increases in storm precipitation. The hazard assessment is being repeated for several values of sea level, as projected for future decades by the New York City Panel on Climate Change. Recent studies have given widely varying estimates of the present-day 100-year flood at New York City, from 2.0 m to 3.5 m, and special emphasis will be placed on quantifying our study's uncertainty.

  2. Oxygen and hydrogen isotope ratios in tree rings: how well do models predict observed values?

    CSIR Research Space (South Africa)

    Waterhouse, JS

    2002-07-30

    Full Text Available the trunk, it is proficient to model the observed annual values of oxygen isotope ratios of alpha-cellulose to a significant level (r = 0.77, P < 0.01). When the same model is applied to hydrogen isotope ratios, results are found, and predictions can be made...

  3. Asymptotics on Semiparametric Analysis of Multivariate Failure Time Data Under the Additive Hazards Model

    Institute of Scientific and Technical Information of China (English)

    Huan-bin Liu; Liu-quan Sun; Li-xing Zhu

    2005-01-01

    Many survival studies record the times to two or more distinct failures on each subject. The failures may be events of different natures or may be repetitions of the same kind of event. In this article, we consider the regression analysis of such multivariate failure time data under the additive hazards model. Simple weighted estimating functions for the regression parameters are proposed, and the asymptotic distribution theory of the resulting estimators is derived. In addition, a class of generalized Wald and generalized score statistics for hypothesis testing and model selection is presented, and the asymptotic properties of these statistics are examined.

  4. Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California

    Science.gov (United States)

    Pike, Richard J.; Graymer, Russell W.

    2008-01-01

    With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of: *Introduction by Russell W. Graymer *Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson *Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk *Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk *Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer *Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike The plates consist of: *Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk *Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven

  5. Flood Hazard Mapping by Using Geographic Information System and Hydraulic Model: Mert River, Samsun, Turkey

    Directory of Open Access Journals (Sweden)

    Vahdettin Demir

    2016-01-01

    Full Text Available In this study, flood hazard maps were prepared for the Mert River Basin, Samsun, Turkey, by using GIS and the Hydrologic Engineering Center's River Analysis System (HEC-RAS). In this river basin, human life losses and a significant amount of property damage were experienced in the 2012 flood. The preparation of the flood risk maps employed in the study includes the following steps: (1) digitization of topographical data and preparation of a digital elevation model using ArcGIS, (2) simulation of flood flows of different return periods using a hydraulic model (HEC-RAS), and (3) preparation of flood risk maps by integrating the results of (1) and (2).

  6. Model Persamaan Massa Karbon Akar Pohon dan Root-Shoot Ratio Massa Karbon (Equation Models of Tree Root Carbon Mass and Root-Shoot Carbon Mass Ratio

    Directory of Open Access Journals (Sweden)

    Elias .

    2011-03-01

    Full Text Available The case study was conducted in the area of an Acacia mangium plantation at BKPH Parung Panjang, KPH Bogor. The objective of the study was to formulate equation models of tree root carbon mass and the root-to-shoot carbon mass ratio of the plantation. It was found that the carbon content in the parts of tree biomass (stems, branches, twigs, leaves, and roots) was different, in which the highest and the lowest carbon content was in the main stem of the tree and in the leaves, respectively. The main stem and leaves of the tree accounted for 70% of tree biomass. The root-shoot ratio of root biomass to tree biomass above the ground and the root-shoot ratio of root biomass to main stem biomass was 0.1443 and 0.25771, respectively, in which 75% of tree carbon mass was in the main stem and roots of the tree. It was also found that the root-shoot ratio of root carbon mass to tree carbon mass above the ground and the root-shoot ratio of root carbon mass to tree main stem carbon mass was 0.1442 and 0.2034, respectively. All allometric equation models of tree root carbon mass of A. mangium have a high goodness-of-fit as indicated by their high adjusted R2. Keywords: Acacia mangium, allometric, root-shoot ratio, biomass, carbon mass
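
    The allometric equations referred to above are typically power laws of the form M = a·D^b, fitted as a straight line after log transformation. The sketch below shows that generic fitting step and the root-shoot carbon ratio computation; the diameter and carbon-mass values are made-up placeholders, not the study's data.

```python
import numpy as np

# Hypothetical trees: diameter at breast height (cm), root and above-ground carbon mass (kg).
dbh = np.array([8.0, 10.5, 12.0, 15.3, 18.1, 20.4, 24.7, 28.0])
root_c = np.array([1.1, 1.9, 2.6, 4.3, 6.5, 8.4, 13.0, 17.2])
above_ground_c = np.array([7.5, 13.0, 18.0, 30.0, 45.0, 58.0, 90.0, 120.0])

# Fit ln(root_c) = ln(a) + b * ln(dbh), i.e. the power-law model root_c = a * dbh**b.
b, ln_a = np.polyfit(np.log(dbh), np.log(root_c), deg=1)
a = np.exp(ln_a)

resid = np.log(root_c) - (ln_a + b * np.log(dbh))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((np.log(root_c) - np.log(root_c).mean()) ** 2)
print(f"root carbon mass ~ {a:.4f} * DBH^{b:.3f}   (log-log R^2 = {r2:.3f})")

# Root-shoot carbon ratio: total root carbon divided by total above-ground carbon.
print("root-shoot carbon ratio:", round(root_c.sum() / above_ground_c.sum(), 4))
```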

  7. Use of agent-based modelling in emergency management under a range of flood hazards

    Directory of Open Access Journals (Sweden)

    Tagg Andrew

    2016-01-01

    Full Text Available The Life Safety Model (LSM was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.

  8. Measurement Error in Proportional Hazards Models for Survival Data with Long-term Survivors

    Institute of Scientific and Technical Information of China (English)

    Xiao-bing ZHAO; Xian ZHOU

    2012-01-01

    This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naïve estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating function and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.
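
    To see why naïve estimation misbehaves when a covariate carries measurement error, the short simulation below fits an ordinary Cox proportional hazards model to data in which the observed covariate is the true one plus noise; the estimated log hazard ratio is attenuated toward zero. This only illustrates the problem the paper addresses, not its corrected partial/full likelihood solution or the long-term survivor (cure) component, and it assumes the third-party lifelines package.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n, beta = 5000, 0.7                                   # true log hazard ratio

x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(scale=1.0, size=n)        # classical additive measurement error

# Exponential event times with hazard 0.5 * exp(beta * x_true), censored at t = 2.
t_event = rng.exponential(scale=1.0 / (0.5 * np.exp(beta * x_true)))
duration = np.minimum(t_event, 2.0)
event = (t_event <= 2.0).astype(int)

for label, x in (("true covariate", x_true), ("error-prone covariate", x_obs)):
    df = pd.DataFrame({"x": x, "duration": duration, "event": event})
    fit = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
    print(f"{label:>22}: estimated beta = {fit.params_['x']:.3f} (true beta = {beta})")
```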

  9. An approach for modeling thermal destruction of hazardous wastes in circulating fluidized bed incinerator.

    Science.gov (United States)

    Patil, M P; Sonolikar, R L

    2008-10-01

    This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Euler-Lagrangian approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using the general-purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for the solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.

  10. MODELING OF THE BUILDING LOCAL PROTECTION (SHELTER-IN-PLACE) INCLUDING SORPTION OF THE HAZARDOUS CONTAMINANT ON INDOOR SURFACES

    Directory of Open Access Journals (Sweden)

    N. N. Belyayev

    2014-05-01

    Full Text Available Purpose. Chemically hazardous objects, where toxic substances are used, manufactured and stored, and also main lines, on which the hazardous materials transportation is conducted, pose potential sources of accidental atmospheric pollution. The purpose is the development of a CFD model for evaluating the efficiency of the building local protection from hazardous substance ingress by using an air curtain and sorption/desorption of the hazardous substance on indoor surfaces. Methodology. To solve the problem of the hydrodynamic interaction of the air curtain with the wind flow, considering the building influence on this process, the model of ideal fluid is used. In order to calculate the transfer process of the hazardous substance in the atmosphere, an equation of convection-diffusion transport of impurities is applied. To calculate the process of indoor air pollution under leaking of foul air, the Karisson & Huber model is used. This model takes into account the sorption of the hazardous substance at various indoor surfaces. For the numerical integration of the model equations, differential methods are used. Findings. In this paper we construct an efficient CFD model for evaluating the effectiveness of the building's protection against ingress of hazardous substances through the use of an air curtain. On the basis of the built model, a computational experiment was carried out to assess the effectiveness of this protection method while varying the location of the air curtain relative to the building. Originality. A new model was developed to compute the effectiveness of the air curtain supply in reducing the toxic chemical concentration inside the building. Practical value. The developed model can be used for the design of the building local protection against ingress of hazardous substances.

  11. Developing a simplified geographical information system approach to dilute lahar modelling for rapid hazard assessment

    Science.gov (United States)

    Darnell, A. R.; Phillips, J. C.; Barclay, J.; Herd, R. A.; Lovett, A. A.; Cole, P. D.

    2013-04-01

    In this study, we present a geographical information system (GIS)-based approach to enable the estimation of lahar features important to rapid hazard assessment (including flow routes, velocities and travel times). Our method represents a simplified first stage in extending the utility of widely used existing GIS-based inundation models, such as LAHARZ, to provide estimates of flow speeds. LAHARZ is used to determine the spatial distribution of a lahar of constant volume, and for a given cell in a GIS grid, a single-direction flow routing technique incorporating the effect of surface roughness directs the flow according to steepest descent. The speed of flow passing through a cell is determined from coupling the flow depth, change in elevation and roughness using Manning's formula, and in areas where there is little elevation difference, flow is routed to locally maximum increase in velocity. Application of this methodology to lahars on Montserrat, West Indies, yielded support for this GIS-based approach as a hazard assessment tool through tests on small volume (5,000-125,000 m3) dilute lahars (consistent with application of Manning's law). Dominant flow paths were mapped, and for the first time in this study area, velocities (magnitudes and spatial distribution) and average travel times were estimated for a range of lahar volumes. Flow depth approximations were also made using (modified) LAHARZ, and these refined the input to Manning's formula. Flow depths were verified within an order of magnitude by field observations, and velocity predictions were broadly consistent with proxy measurements and published data. Forecasts from this coupled method can operate on short to mid-term timescales for hazard management. The methodology has potential to provide a rapid preliminary hazard assessment in similar systems where data acquisition may be difficult.
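
    The velocity step described above rests on Manning's formula, v = (1/n)·R^(2/3)·S^(1/2). A minimal per-cell calculation is sketched below; the roughness coefficient, the wide-channel approximation (hydraulic radius taken as flow depth) and the sample numbers are illustrative assumptions, not values from the Montserrat application.

```python
import numpy as np

def manning_velocity(depth_m, slope, n_roughness):
    """Flow speed (m/s) from Manning's formula, approximating the hydraulic
    radius by the flow depth (reasonable for wide, shallow flow over a grid cell)."""
    depth_m = np.asarray(depth_m, dtype=float)
    slope = np.maximum(np.asarray(slope, dtype=float), 1e-6)   # guard against zero slope
    return (1.0 / n_roughness) * depth_m ** (2.0 / 3.0) * np.sqrt(slope)

# Example: flow depths (m) and local bed slopes along a hypothetical lahar path.
depths = np.array([0.3, 0.8, 1.5, 0.6])
slopes = np.array([0.12, 0.08, 0.05, 0.02])
v = manning_velocity(depths, slopes, n_roughness=0.05)
print("cell velocities (m/s):", np.round(v, 2))

# Travel time over four 100 m long cells, summed along the path.
print("travel time (s):", round(float(np.sum(100.0 / v)), 1))
```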

  12. Study of Ratio of Proton Momentum Distributions with a Chiral Quark Model

    Institute of Scientific and Technical Information of China (English)

    LIU Jian; DONG Yu-Bing

    2005-01-01

    The ratio between the anomalous magnetic moments of the proton and neutron has recently been suggested to be connected to the ratio of the proton momentum fractions carried by the valence quarks inside it. This momentum fraction ratio is evaluated using the constituent quark model and the chiral quark model, respectively, in order to check the meson cloud effect. Our results show that the meson cloud effect is remarkable for the ratio of the proton momentum fractions, and therefore this ratio is a sensitive test for the meson cloud effect as well as for the SU(6) symmetry breaking effect.

  13. Earthquake catalogs for the 2017 Central and Eastern U.S. short-term seismic hazard model

    Science.gov (United States)

    Mueller, Charles S.

    2017-01-01

    The U.S. Geological Survey (USGS) makes long-term seismic hazard forecasts that are used in building codes. The hazard models usually consider only natural seismicity; non-tectonic (man-made) earthquakes are excluded because they are transitory or too small. In the past decade, however, thousands of earthquakes related to underground fluid injection have occurred in the central and eastern U.S. (CEUS), and some have caused damage. In response, the USGS is now also making short-term forecasts that account for the hazard from these induced earthquakes. Seismicity statistics are analyzed to develop recurrence models, accounting for catalog completeness. In the USGS hazard modeling methodology, earthquakes are counted on a map grid, recurrence models are applied to estimate the rates of future earthquakes in each grid cell, and these rates are combined with maximum-magnitude models and ground-motion models to compute the hazard. The USGS published a forecast for the years 2016 and 2017. Here, we document the development of the seismicity catalogs for the 2017 CEUS short-term hazard model. A uniform earthquake catalog is assembled by combining and winnowing pre-existing source catalogs. The initial, final, and supporting earthquake catalogs are made available here.
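
    The grid-counting step mentioned above (catalog events above a completeness magnitude tallied on a map grid and converted to annual rates per cell) can be sketched in a few lines. The grid spacing, magnitude threshold and synthetic catalog below are placeholders; the declustering, smoothing and completeness analysis used in the real methodology are omitted.

```python
import numpy as np

def gridded_annual_rates(lons, lats, mags, years_span, m_min=3.0, cell_deg=0.5,
                         lon_range=(-100.0, -90.0), lat_range=(30.0, 40.0)):
    """Count events with M >= m_min in regular grid cells and convert to annual rates."""
    keep = mags >= m_min
    lon_edges = np.arange(lon_range[0], lon_range[1] + cell_deg, cell_deg)
    lat_edges = np.arange(lat_range[0], lat_range[1] + cell_deg, cell_deg)
    counts, _, _ = np.histogram2d(lons[keep], lats[keep], bins=[lon_edges, lat_edges])
    return counts / years_span            # events per cell per year

# Synthetic two-year catalog clustered near (-97.5, 36.0), vaguely Gutenberg-Richter-like.
rng = np.random.default_rng(1)
n_ev = 400
lons = rng.normal(-97.5, 0.6, n_ev)
lats = rng.normal(36.0, 0.4, n_ev)
mags = 2.5 + rng.exponential(0.45, n_ev)
rates = gridded_annual_rates(lons, lats, mags, years_span=2.0)
print("maximum annual rate in a single cell:", rates.max())
```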

  14. Biases in modeled surface snow BC mixing ratios in prescribed aerosol climate model runs

    OpenAIRE

    Doherty, S. J.; C. M. Bitz; M. G. Flanner

    2014-01-01

    A series of recent studies have used prescribed aerosol deposition flux fields in climate model runs to assess forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we use a series of offline calculations to show that this approach results, on average, in a factor of about 1.5–2.5 high bias in annual-mean surface snow BC mixing ratios in three ...

  15. Web-based Services for Earth Observing and Model Data in National Applications and Hazards

    Science.gov (United States)

    Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.

    2005-12-01

    The ever-growing large volumes of Earth system science data, collected by Earth observing platforms, in situ stations and as model output data, are increasingly being used by discipline scientists and by wider classes of users. In particular, applications of Earth system science data to environmental and hazards as well as other national applications, require tailored or specialized data, as well as web-based tools and infrastructure. The latter are driven by applications and usage drivers which include ease of access, visualization of complex data, ease of producing value-added data, GIS and open source analysis usage, metadata, etc. Here we present different aspects of such web-based services and access, and discuss several applications in the hazards and environmental areas, including earthquake signatures and observations and model runs of hurricanes. Examples and lessons learned from the consortium Mid-Atlantic Geospatial Information Consortium will be presented. We discuss a NASA-funded, open source on-line data analysis system that is being applied to climate studies for the ESIP Federation. Since enhanced, this project and the next-generation Metadata Integrated Data Analysis System allow users not only to identify data but also to generate new data products on-the-fly. The functionalities extend from limited predefined functions, to sophisticated functions described by general-purposed GrADS (Grid Analysis and Display System) commands. The Federation system also allows third party data products to be combined with local data. Software component are available for converting the output from MIDAS (OPenDAP) into OGC compatible software. The on-going Grid efforts at CEOSR and LAITS in the School of Computational Sciences (SCS) include enhancing the functions of Globus to provide support for a geospatial system so the system can share the computing power to handle problems with different peak access times and improve the stability and flexibility of a rapid

  16. TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment

    Science.gov (United States)

    Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano

    2016-04-01

    Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediments and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flows models proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools between practitioners and stakeholders, although the UE Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly and multiple stand-alone software are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, leading possibly to unreliable results. Therefore, some effort seems to be necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009, Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining advantages offered by the software-delivering model SaaS (Software as a Service) and by WebGIS technology and hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an

  17. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    Full Text Available The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, including its various regions, e.g. Phangna and Phuket, devastating human lives, coastal communications and the financially viable activities. This research study has been aimed at generating the tsunami hazard preventing based land use planning model using GIS (Geographical Information Systems) based on the hazard suitability analysis approach. The different triggering factors, e.g. elevation, proximity to shore line, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. This model has been classified according to the land suitability classification. The various techniques of GIS, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for the decision makers to redevelop the region.

  18. Hazard-consistent ground motions generated with a stochastic fault-rupture model

    Energy Technology Data Exchange (ETDEWEB)

    Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-12-15

    Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-functional loss in a nuclear power plant. These generated ground motions are consistent with seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s^2 peak acceleration, which corresponds to a 10^-4 to 10^-5 annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground motion acceleration generated for 1000–1100 cm/s^2 range from 1.5 to 3.0, where the deviation is evaluated with peak ground motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to
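
    The generation loop described above (sample stochastic source parameters, simulate a ground motion, keep it only if its peak acceleration falls in the target range) is essentially rejection sampling. The sketch below reproduces that control flow only; simulate_pga is a stand-in for the stochastic ground-motion simulation, and its scaling is a made-up assumption used purely to exercise the loop.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_pga(stress_drop_mpa, magnitude):
    """Stand-in for a stochastic ground-motion simulation; returns a peak ground
    acceleration (cm/s^2) with hypothetical scaling and lognormal variability."""
    median = 25.0 * stress_drop_mpa * np.exp(0.9 * (magnitude - 6.0))
    return median * rng.lognormal(mean=0.0, sigma=0.5)

def generate_target_motions(n_target, pga_lo, pga_hi, max_trials=200_000):
    """Rejection-sample source parameters until n_target motions land in [pga_lo, pga_hi]."""
    accepted, trials = [], 0
    while len(accepted) < n_target and trials < max_trials:
        trials += 1
        stress_drop = rng.lognormal(mean=np.log(10.0), sigma=0.6)   # MPa, stochastic parameter
        magnitude = rng.uniform(6.0, 7.2)                           # from contributing sources
        pga = simulate_pga(stress_drop, magnitude)
        if pga_lo <= pga <= pga_hi:
            accepted.append((stress_drop, magnitude, pga))
    return accepted, trials

motions, trials = generate_target_motions(20, pga_lo=700.0, pga_hi=1100.0)
print(f"accepted {len(motions)} motions after {trials} trials")
```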

  19. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  20. Immiscible multicomponent lattice Boltzmann model for fluids with high relaxation time ratio

    Indian Academy of Sciences (India)

    Tao Jiang; Qiwei Gong; Ruofan Qiu; Anlin Wang

    2014-10-01

    An immiscible multicomponent lattice Boltzmann model is developed for fluids with high relaxation time ratios, which is based on the model proposed by Shan and Chen (SC). In the SC model, an interaction potential between particles is incorporated into the discrete lattice Boltzmann equation through the equilibrium velocity. Compared to the SC model, external forces in our model are discretized directly into the discrete lattice Boltzmann equation, as proposed by Guo et al. We develop it into a new multicomponent lattice Boltzmann (LB) model which has the ability to simulate immiscible multicomponent fluids with relaxation time ratios as large as 29.0 and to reduce 'spurious velocity'. In this work, the improved model is validated and studied using the central bubble case and the rising bubble case. It finds good applications in both static and dynamic cases for multicomponent simulations with different relaxation time ratios.

  1. A "mental models" approach to the communication of subsurface hydrology and hazards

    Science.gov (United States)

    Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison

    2016-05-01

    Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data of perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.

  2. Partitioning into hazard subregions for regional peaks-over-threshold modeling of heavy precipitation

    Science.gov (United States)

    Carreau, J.; Naveau, P.; Neppel, L.

    2017-05-01

    The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
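
    A bare-bones version of the peaks-over-threshold step (keep daily totals above a high threshold, fit a generalized Pareto distribution to the exceedances, inspect the shape parameter and a return level) is sketched below with SciPy. The threshold choice and the synthetic rainfall series are illustrative only; the subregion partitioning and conditional-mixture machinery of the paper are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic daily precipitation (mm): mostly light days with occasional heavy events.
precip = rng.gamma(shape=0.4, scale=8.0, size=20 * 365)

threshold = np.quantile(precip, 0.98)              # high threshold, e.g. the 98th percentile
exceedances = precip[precip > threshold] - threshold

# Fit the GP to exceedances with location fixed at 0; shape > 0 indicates a heavy tail.
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
print(f"threshold = {threshold:.1f} mm, shape = {shape:.3f}, scale = {scale:.2f}")

# T-year return level, given the average number of exceedances per year.
rate_per_year = len(exceedances) / 20.0
T = 100
rl = threshold + stats.genpareto.ppf(1.0 - 1.0 / (T * rate_per_year), shape, loc=0, scale=scale)
print(f"{T}-year daily precipitation: {rl:.1f} mm")
```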

  3. Modeling of groundwater productivity in northeastern Wasit Governorate, Iraq using frequency ratio and Shannon's entropy models

    Science.gov (United States)

    Al-Abadi, Alaa M.

    2017-05-01

    In recent years, delineation of groundwater productivity zones plays an increasingly important role in sustainable management of groundwater resources throughout the world. In this study, the groundwater productivity index of northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in the framework of GIS. Eight factors believed to influence the groundwater occurrence in the study area were selected and used as the input data. These factors were elevation (m), slope angle (degree), geology, soil, aquifer transmissivity (m2/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/sec) was prepared. 47 boreholes (70 %) were used as training data and the remaining 21 (30 %) were used for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results of the analysis indicate that the FR model, with a success rate of 87.4 % and a prediction rate of 86.9 %, performed slightly better than the Shannon's entropy model, with a success rate of 84.4 % and a prediction rate of 82.4 %. The resultant groundwater productivity index was classified into five classes using the natural break classification scheme: very low, low, moderate, high, and very high. The high and very high classes for the FR and Shannon's entropy models occurred within 30 % (217 km2) and 31 % (220 km2) of the area, respectively, indicating low productivity conditions of the aquifer system. Both models were capable of prospecting the GWPI with very good results, but FR was better in terms of success and prediction rates. Results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.
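
    The frequency ratio for a class of a factor map is simply the share of high-yield boreholes falling in that class divided by the share of the study area the class occupies; values above 1 indicate a positive association with productivity. A small pandas sketch of that bookkeeping is shown below with made-up class counts, not the study's data.

```python
import pandas as pd

# Hypothetical tallies for one factor (e.g. aquifer transmissivity classes).
df = pd.DataFrame({
    "class":          ["very low", "low", "moderate", "high"],
    "cell_count":     [40000, 30000, 20000, 10000],   # grid cells of each class in the area
    "borehole_count": [5, 10, 22, 31],                # high-yield training boreholes per class
})

df["area_share"] = df["cell_count"] / df["cell_count"].sum()
df["borehole_share"] = df["borehole_count"] / df["borehole_count"].sum()
df["frequency_ratio"] = df["borehole_share"] / df["area_share"]
print(df[["class", "frequency_ratio"]])

# The productivity index at a location is then the sum of the frequency ratios of the
# classes that location falls in, accumulated over all factor maps.
```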

  4. Elasto-Plasticity Critical Corrosive Ratio Model for RC Structure Corrosive Expanding Crack

    Institute of Scientific and Technical Information of China (English)

    CHEN Yueshun; LU Yiyan; LIU Li

    2007-01-01

    The parameters filling expansion ratio n, plasticity factor k1 and deformation parameter k2 are introduced, and an elasto-plasticity critical corrosive ratio model for RC structure corrosive expanding crack, based on elasto-plasticity theory, is constructed in this paper. The influences of parameters such as the filling expansion ratio n, plasticity factor k1, deformation parameter k2, Poisson ratio of concrete v, diameter of reinforced bar d and protective layer thickness c on the critical corrosive ratio are studied by theoretical analysis and experiments. The experimental results validate the accuracy of the model. According to the experimental study, the least squares solution is calculated as n=1.8, k1=0.61, k2=0.5.

  5. Estimating the Tradeoff Between Risk Protection and Moral Hazard with a Nonlinear Budget Set Model of Health Insurance*

    OpenAIRE

    Amanda E. Kowalski

    2015-01-01

    Insurance induces a well-known tradeoff between the welfare gains from risk protection and the welfare losses from moral hazard. Empirical work traditionally estimates each side of the tradeoff separately, potentially yielding mutually inconsistent results. I develop a nonlinear budget set model of health insurance that allows for the calculation of both sides of the tradeoff simultaneously, allowing for a relationship between moral hazard and risk protection. An important feature of this mod...

  6. Application of a Data Mining Model and It's Cross Application for Landslide Hazard Analysis: a Case Study in Malaysia

    Science.gov (United States)

    Pradhan, Biswajeet; Lee, Saro; Shattri, Mansor

    This paper deals with landslide hazard analysis and cross-application using Geographic Information System (GIS) and remote sensing data for Cameron Highland, Penang Island and Selangor in Malaysia. The aim of this study was to cross-apply and verify a spatial probabilistic model for landslide hazard analysis. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys. Topographical/geological data and satellite images were collected and processed using GIS and image processing tools. There are ten landslide-inducing parameters which are considered for the landslide hazard analysis. These parameters are topographic slope, aspect, curvature and distance from drainage, all derived from the topographic database; geology and distance from lineament, derived from the geologic database; landuse from Landsat satellite images; soil from the soil database; precipitation amount, derived from the rainfall database; and the vegetation index value from SPOT satellite images. These factors were analyzed using an artificial neural network model to generate the landslide hazard map. Each factor's weight was determined by the back-propagation training method. Then the landslide hazard indices were calculated using the trained back-propagation weights, and finally the landslide hazard map was generated using GIS tools. Landslide hazard maps were drawn for these three areas using the artificial neural network model derived not only from the data for that area but also using the weight for each parameter calculated from each of the other two areas (nine maps in all), as a cross-check of the validity of the method. For verification, the results of the analyses were compared, in each study area, with actual landslide locations. The verification results showed sufficient agreement between the presumptive hazard map and the existing data on landslide areas.

  7. Quantification of Inter-Tsunami Model Variability for Hazard Assessment Studies

    Science.gov (United States)

    Catalan, P. A.; Alcantar, A.; Cortés, P. I.

    2014-12-01

    There is a wide range of numerical models capable of modeling tsunamis, most of which have been properly validated and verified against standard benchmark cases and particular field or laboratory case studies. Consequently, these models are regularly used as essential tools in estimating the tsunami hazard on coastal communities by scientists or consulting companies, treating model results in a deterministic way. Most of these models are derived from the same set of equations, typically the Non Linear Shallow Water Equations, to which ad-hoc terms are added to include physical effects such as friction, the Coriolis force, and others. However, these models are not very often used in unison to address the variability in the results. Therefore, in this contribution, we perform a high number of simulations using a set of numerical models and quantify the variability in the results. In order to reduce the influence of input data on the results, a single tsunami scenario is used over a common bathymetry. Next, we perform model comparisons to assess sensitivity to changes in grid resolution and Manning roughness coefficients. Results are presented as intra-model comparisons (sensitivity to changes using the same model) and inter-model comparisons (sensitivity to changing models). For the case tested, it was observed that most models reproduced fairly consistently the arrival and periodicity of the tsunami waves. However, variations in amplitude, characterized by the standard deviation between model runs, could be as large as the mean signal. This level of variability is considered too large for deterministic assessment, reinforcing the idea that uncertainty needs to be included in such studies.
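
    The variability measure used above (standard deviation across model runs compared with the ensemble mean) reduces to simple array statistics once the simulated water-level series are aligned on a common time base. A sketch with synthetic series is shown below; the number of models and the signals themselves are placeholders, not output of the tsunami codes discussed.

```python
import numpy as np

t = np.linspace(0.0, 6.0, 601)            # hours after the earthquake

# Synthetic water levels (m) at one gauge from four hypothetical models: same arrival
# and period, but different amplitudes and small phase shifts.
amps, phases = [0.9, 1.1, 1.4, 0.7], [0.0, 0.02, -0.03, 0.05]
runs = np.array([a * np.exp(-0.3 * t) * np.sin(2 * np.pi * (t - p) / 0.8)
                 for a, p in zip(amps, phases)])

ens_mean = runs.mean(axis=0)
ens_std = runs.std(axis=0, ddof=1)

# Summarize inter-model variability at the time of the peak of the ensemble mean.
ipk = np.argmax(np.abs(ens_mean))
print(f"peak |mean| = {abs(ens_mean[ipk]):.2f} m, inter-model std there = {ens_std[ipk]:.2f} m")
print(f"coefficient of variation at the peak = {ens_std[ipk] / abs(ens_mean[ipk]):.2f}")
```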

  8. Socio-economic vulnerability to natural hazards - proposal for an indicator-based model

    Science.gov (United States)

    Eidsvig, U.; McLean, A.; Vangelsten, B. V.; Kalsnes, B.; Ciurean, R. L.; Argyroudis, S.; Winter, M.; Corominas, J.; Mavrouli, O. C.; Fotopoulou, S.; Pitilakis, K.; Baills, A.; Malet, J. P.

    2012-04-01

    Vulnerability assessment, with respect to natural hazards, is a complex process that must consider multiple dimensions of vulnerability, including both physical and social factors. Physical vulnerability refers to conditions of physical assets, and may be modeled by the intensity and magnitude of the hazard, the degree of physical protection provided by the natural and built environment, and the physical robustness of the exposed elements. Social vulnerability refers to the underlying factors leading to the inability of people, organizations, and societies to withstand impacts from the natural hazards. Social vulnerability models can be used in combination with physical vulnerability models to estimate both direct losses, i.e. losses that occur during and immediately after the impact, as well as indirect losses, i.e. long-term effects of the event. Direct impact of a landslide typically includes casualties and damages to buildings and infrastructure while indirect losses may e.g. include business closures or limitations in public services. The direct losses are often assessed using physical vulnerability indicators (e.g. construction material, height of buildings), while indirect losses are mainly assessed using social indicators (e.g. economical resources, demographic conditions). Within the EC-FP7 SafeLand research project, an indicator-based method was proposed to assess relative socio-economic vulnerability to landslides. The indicators represent the underlying factors which influence a community's ability to prepare for, deal with, and recover from the damage associated with landslides. The proposed model includes indicators representing demographic, economic and social characteristics as well as indicators representing the degree of preparedness and recovery capacity. Although the model focuses primarily on the indirect losses, it could easily be extended to include more physical indicators which account for the direct losses. Each indicator is individually

  9. A Quasi-Poisson Approach on Modeling Accident Hazard Index for Urban Road Segments

    Directory of Open Access Journals (Sweden)

    Lu Ma

    2014-01-01

    Full Text Available In light of the recently emphasized studies on risk evaluation of crashes, accident counts under specific transportation facilities are adopted to reflect the chance of crash occurrence. The current study introduces a more comprehensive measure, with supplementary information on accident harmfulness, into the expression of accident risk, which is named the Accident Hazard Index (AHI) in the following context. Before the statistical analysis, datasets from various sources are integrated under a GIS platform, and the corresponding procedures are presented as an illustrative example for similar analyses. Then, a quasi-Poisson regression model is suggested for the analyses, and the results show that the model is appropriate for dealing with overdispersed count data; several key explanatory variables were found to have a significant impact on the estimation of the AHI. In addition, the effect of weight on different severity levels of accidents is examined and the selection of the weight is also discussed.
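
    A quasi-Poisson fit of an over-dispersed count or index response can be reproduced with statsmodels by fitting a Poisson GLM and estimating the dispersion from the Pearson chi-square, which inflates the standard errors accordingly. The sketch below uses a simulated over-dispersed outcome and generic road-segment covariates; it assumes the statsmodels package and is not the paper's dataset or model specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 800

# Hypothetical road-segment covariates: traffic volume and segment length.
aadt = rng.lognormal(mean=9.0, sigma=0.5, size=n)
length_km = rng.uniform(0.2, 3.0, size=n)
X = sm.add_constant(np.column_stack([np.log(aadt), length_km]))

# Simulate an over-dispersed (negative binomial) outcome to mimic an AHI-like count index.
mu = np.exp(-6.0 + 0.6 * np.log(aadt) + 0.4 * length_km)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

# Quasi-Poisson: Poisson mean structure, dispersion estimated via Pearson chi-square ('X2').
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.summary())
print("estimated dispersion (scale):", round(float(fit.scale), 2))
```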

  10. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2006-04-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  11. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2007-08-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  12. The unconvincing product - Consumer versus expert hazard identification: A mental models study of novel foods

    DEFF Research Database (Denmark)

    Hagemann, Kit; Scholderer, Joachim

    Novel foods have been the object of intense public debate in recent years. Despite efforts to communicate the outcomes of risk assessments to consumers, public confidence in the management of the potential risks associated with them has been low. Various reasons behind this have been identified, chiefly a disagreement ... [between consumers'] and experts' understanding of benefits and risks associated with three novel foods (a potato, rice and functional food ingredients), studied using a relatively new methodology for the study of risk perception called mental models. Mental models focus on the way people conceptualise hazardous processes and allow ... offered by lifelong habits. Consumers found it utterly unconvincing that, all of a sudden, they should regard their everyday foods as toxic, and therefore it might not be possible to effectively communicate the health benefits of some novel foods to consumers. Several misconceptions became apparent...

  13. A Gis Model Application Supporting The Analysis of The Seismic Hazard For The Urban Area of Catania (italy)

    Science.gov (United States)

    Grasso, S.; Maugeri, M.

    rigorous complex methods of analysis or qualitative procedures. A semi-quantitative procedure based on the definition of a geotechnical hazard index has been applied for the zonation of the seismic geotechnical hazard of the city of Catania. In particular, this procedure has been applied to define the influence of the geotechnical properties of the soil in a central area of the city, where some historical buildings of great importance are sited. An investigation was also performed, based on the inspection of more than one hundred historically important ecclesiastical buildings located in the city. Then, in order to identify the amplification effects due to site conditions, a geotechnical survey form was prepared to allow a semi-quantitative evaluation of the seismic geotechnical hazard for all these historical buildings. In addition, to evaluate the foundation-soil time-history response, a 1-D dynamic soil model was employed for all these buildings, considering the nonlinearity of soil behaviour. Using a GIS, a map of the seismic geotechnical hazard, a map of the liquefaction hazard and a preliminary map of the seismic hazard for the city of Catania have been obtained. From the analysis of the results it may be noticed that high hazard zones are mainly clayey sites

  14. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models.

    Science.gov (United States)

    Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.

    2013-11-01

    El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that at the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The more exposed areas to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results

  15. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models

    Directory of Open Access Journals (Sweden)

    J. A. Álvarez-Gómez

    2013-05-01

    Full Text Available El Salvador is the smallest and most densely populated country in Central America; its coast has approximately a length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there have been 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and hundreds of victims. The hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached from both Probabilistic and Deterministic Methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold, on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps and from the elevation in the near-shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of Pacific basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences – finite volumes numerical model in this work, based on the Linear and Non-linear Shallow Water Equations, to simulate a total of 24 earthquake generated tsunami scenarios. In the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The more exposed areas to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results obtained

  16. Issues in testing the new national seismic hazard model for Italy

    Science.gov (United States)

    Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.

    2016-12-01

    It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and we have no agreed way of assessing how well a map performed in the past, and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismology, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences that show how the models have developed toward the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and societal perspective to look at all options including deterministic scenario-based ones. Comparing what works

  17. Development of models to inform a national Daily Landslide Hazard Assessment for Great Britain

    Science.gov (United States)

    Dijkstra, Tom A.; Reeves, Helen J.; Dashwood, Claire; Pennington, Catherine; Freeborough, Katy; Mackay, Jonathan D.; Uhlemann, Sebastian S.; Chambers, Jonathan E.; Wilkinson, Paul B.

    2015-04-01

    were combined with records of observed landslide events to establish which antecedent effective precipitation (AEP) signatures of different duration could be used as a pragmatic proxy for the occurrence of landslides. It was established that 1, 7, and 90 days AEP provided the most significant correlations and these were used to calculate the probability of at least one landslide occurring. The method was then extended over the period 2006 to 2014 and the results evaluated against observed occurrences. It is recognised that AEP is a relatively poor proxy for simulating effective stress conditions along potential slip surfaces. However, the temporal pattern of landslide probability compares well to the observed occurrences and provides a potential benefit to assist with the DLHA. Further work is continuing to fine-tune the model for landslide type, better spatial resolution of effective precipitation input and cross-reference to models that capture changes in water balance and conditions along slip surfaces. The latter is facilitated by intensive research at several field laboratories, such as the Hollin Hill site in Yorkshire, England. At this site, a decade of activity has generated a broad range of research and a wealth of data. This paper reports on one example of recent work; the characterisation of near surface hydrology using infiltration experiments where hydrological pathways are captured, among others, by electrical resistivity tomography. This research, which has further developed our understanding of soil moisture movement in a heterogeneous landslide complex, has highlighted the importance of establishing detailed ground models to enable determination of landslide potential at high resolution. In turn, the knowledge gained through this research is used to enhance the expertise for the daily landslide hazard assessments at a national scale.
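
    The abstract does not give the exact formulation used to turn the antecedent effective precipitation (AEP) signatures into a probability of at least one landslide. The sketch below is one plausible way (not necessarily the authors' method) to do this, assuming a logistic model over the 1-, 7- and 90-day AEP windows and entirely synthetic data.

```python
# Relating 1-, 7- and 90-day AEP to the daily probability of at least one
# landslide; illustrative only, with synthetic data and an assumed logistic model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_days = 1500
days = pd.DataFrame({
    "aep_1":  rng.gamma(2.0, 4.0, n_days),    # mm over the last day
    "aep_7":  rng.gamma(4.0, 8.0, n_days),    # mm over the last 7 days
    "aep_90": rng.gamma(8.0, 15.0, n_days),   # mm over the last 90 days
})
# Synthetic indicator of at least one reported landslide on a given day
logit = -6.0 + 0.05 * days["aep_1"] + 0.02 * days["aep_7"] + 0.005 * days["aep_90"]
days["landslide"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

clf = LogisticRegression(max_iter=1000).fit(days[["aep_1", "aep_7", "aep_90"]],
                                            days["landslide"])

# Daily probability of at least one landslide for a new AEP signature
new = pd.DataFrame({"aep_1": [20.0], "aep_7": [60.0], "aep_90": [250.0]})
print(clf.predict_proba(new)[:, 1])
```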

  18. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    Science.gov (United States)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggering causes of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of a lack of data on past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.

  19. The Hazard Analysis and Critical Control Points (HACCP) generic model for the production of Thai fermented pork sausage (Nham).

    Science.gov (United States)

    Paukatong, K V; Kunawasen, S

    2001-01-01

    Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne diseases for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. Other than microbial hazards, the pathogens previously found in Nham, sodium nitrite and metal were identified as chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points. These steps are the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step. The critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm. This level is high enough to control Clostridium botulinum but does not cause chemical hazards to the consumer. The physical hazard from metal clips could be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham could be reduced in the fermentation process. The critical limit of the pH of Nham was set at lower than 4.6. Since this product is not cooked during processing, finally, educating the consumer, by providing information on the label such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.

  20. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    Science.gov (United States)

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with the assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their pairwise combinations, against Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of the ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we have used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each of the experiments carried out. The second approach was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function, with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure of the impact of either drug A on drug B or, vice versa, of B on A. The results justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action and only a few of the mixtures show synergistic or dependent action. Copyright © 2016. Published by Elsevier B.V.
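
    A minimal sketch of the two reference mixture models named above (concentration addition and independent action) and of a model deviation ratio; all EC50 values, mixture fractions and the predicted/observed MDR convention are illustrative assumptions, not the study's numbers.

```python
# Reference mixture-toxicity models and a model deviation ratio (MDR);
# all values are hypothetical.

ec50_a, ec50_b = 2.0, 8.0        # single-substance EC50s (mg/L), illustrative
p_a, p_b = 0.5, 0.5              # fractions of each drug in the mixture

# Concentration addition (Loewe): predicted mixture EC50
ec50_ca = 1.0 / (p_a / ec50_a + p_b / ec50_b)

# Independent action (Bliss): combined effect at given single-substance effects
def ia_effect(e_a, e_b):
    return 1.0 - (1.0 - e_a) * (1.0 - e_b)

# One common MDR convention: CA-predicted EC50 over observed mixture EC50;
# MDR >> 1 suggests synergism, MDR << 1 antagonism relative to CA.
ec50_observed = 3.0              # hypothetical measured mixture EC50
mdr = ec50_ca / ec50_observed
print(f"CA-predicted EC50 = {ec50_ca:.2f} mg/L, MDR = {mdr:.2f}")
print(f"IA effect at 20% + 30% single effects = {ia_effect(0.2, 0.3):.2f}")
```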

  1. Spatio-Temporal Risk Assessment Process Modeling for Urban Hazard Events in Sensor Web Environment

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-11-01

    Full Text Available Immediate risk assessment and analysis are crucial in managing urban hazard events (UHEs. However, it is a challenge to develop an immediate risk assessment process (RAP that can integrate distributed sensors and data to determine the uncertain model parameters of facilities, environments, and populations. To solve this problem, this paper proposes a RAP modeling method within a unified spatio-temporal framework and forms a 10-tuple process information description structure based on a Meta-Object Facility (MOF. A RAP is designed as an abstract RAP chain that collects urban information resources and performs immediate risk assessments. In addition, we propose a prototype system known as Risk Assessment Process Management (RAPM to achieve the functions of RAP modeling, management, execution and visualization. An urban gas leakage event is simulated as an example in which individual risk and social risk are used to illustrate the applicability of the RAP modeling method based on the 10-tuple metadata framework. The experimental results show that the proposed RAP immediately assesses risk by the aggregation of urban sensors, data, and model resources. Moreover, an extension mechanism is introduced in the spatio-temporal RAP modeling method to assess risk and to provide decision-making support for different UHEs.

  2. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Ned; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.

  3. Analysis of two-phase sampling data with semiparametric additive hazards models.

    Science.gov (United States)

    Sun, Yanqing; Qian, Xiyuan; Shou, Qiong; Gilbert, Peter B

    2017-07-01

    Under the case-cohort design introduced by Prentice (Biometrika 73:1-11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows use of the auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
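
    The paper's augmented IPW estimator for the additive hazards model is not available in standard libraries, so the sketch below only illustrates the basic inverse-probability-weighting step under two-phase sampling: estimate phase-two selection probabilities from phase-one information and weight the complete cases by their inverse. The data frame, column names and selection model are hypothetical.

```python
# IPW step under two-phase (case-cohort) sampling; illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
cohort = pd.DataFrame({
    "case":  rng.binomial(1, 0.15, n),     # failure indicator (phase one)
    "x_aux": rng.normal(size=n),           # auxiliary phase-one covariate
})
# Phase-two selection: all cases plus a random subcohort of non-cases.
cohort["selected"] = np.where(cohort["case"] == 1, 1,
                              rng.binomial(1, 0.2, n))

# Estimate selection probabilities from phase-one information ...
sel_model = LogisticRegression().fit(cohort[["case", "x_aux"]], cohort["selected"])
cohort["p_sel"] = sel_model.predict_proba(cohort[["case", "x_aux"]])[:, 1]

# ... and weight the complete cases by the inverse of those probabilities.
complete = cohort[cohort["selected"] == 1].copy()
complete["ipw"] = 1.0 / complete["p_sel"]
# These weights would enter the additive-hazards estimating equations; the
# augmented estimator adds a term built from auxiliary covariates for efficiency.
print(complete["ipw"].describe())
```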

  4. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
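
    A minimal sketch of the idea described above: fit a mixture of regression lines through the origin by expectation-maximisation and read the isotope ratios off the slopes. The two-component model, the synthetic signal intensities and the initial values are assumptions for illustration, not the authors' implementation.

```python
# EM for a two-component mixture of regression lines through the origin
# (slope = isotope ratio); synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 400)                      # e.g. major-isotope signal
true_ratio = np.where(rng.random(400) < 0.5, 0.0072, 0.034)
y = true_ratio * x + rng.normal(0, 0.01, 400)    # e.g. minor-isotope signal

K = 2
slopes = np.array([0.005, 0.05])                 # initial slope guesses
sigmas = np.array([0.02, 0.02])
weights = np.full(K, 1.0 / K)

for _ in range(200):
    # E-step: responsibility of each component for each point
    dens = np.stack([
        weights[k] / (sigmas[k] * np.sqrt(2 * np.pi))
        * np.exp(-0.5 * ((y - slopes[k] * x) / sigmas[k]) ** 2)
        for k in range(K)
    ])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted least-squares slope, residual spread, mixing weights
    for k in range(K):
        r = resp[k]
        slopes[k] = np.sum(r * x * y) / np.sum(r * x * x)
        sigmas[k] = np.sqrt(np.sum(r * (y - slopes[k] * x) ** 2) / np.sum(r))
        weights[k] = r.mean()

print("estimated ratios (slopes):", np.round(slopes, 4))
```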

  5. Weather modeling for hazard and consequence assessment operations during the 2006 Winter Olympic Games

    Science.gov (United States)

    Hayes, P.; Trigg, J. L.; Stauffer, D.; Hunter, G.; McQueen, J.

    2006-05-01

    Consequence assessment (CA) operations are those processes that attempt to mitigate negative impacts of incidents involving hazardous materials such as chemical, biological, radiological, nuclear, and high explosive (CBRNE) agents, facilities, weapons, or transportation. Incident types range from accidental spillage of chemicals at/en route to/from a manufacturing plant, to the deliberate use of radiological or chemical material as a weapon in a crowded city. The impacts of these incidents are highly variable, from little or no impact to catastrophic loss of life and property. Local and regional scale atmospheric conditions strongly influence atmospheric transport and dispersion processes in the boundary layer, and the extent and scope of the spread of dangerous materials in the lower levels of the atmosphere. Therefore, CA personnel charged with managing the consequences of CBRNE incidents must have detailed knowledge of current and future weather conditions to accurately model potential effects. A meteorology team was established at the U.S. Defense Threat Reduction Agency (DTRA) to provide weather support to CA personnel operating DTRA's CA tools, such as the Hazard Prediction and Assessment Capability (HPAC) tool. The meteorology team performs three main functions: 1) regular provision of meteorological data for use by personnel using HPAC, 2) determination of the best performing medium-range model forecast for the 12 - 48 hour timeframe and 3) provision of real-time help-desk support to users regarding acquisition and use of weather in HPAC CA applications. The normal meteorology team operations were expanded during a recent modeling project which took place during the 2006 Winter Olympic Games. The meteorology team took advantage of special weather observation datasets available in the domain of the Winter Olympic venues and undertook a project to improve weather modeling at high resolution. The varied and complex terrain provided a special challenge to the

  6. Price/Earnings Ratio Model through Dividend Yield and Required Yield Above Expected Inflation

    Directory of Open Access Journals (Sweden)

    Emil Mihalina

    2010-07-01

    Full Text Available Price/earnings ratio is the most popular and most widespread evaluation model used to assess relative capital asset value on financial markets. In functional terms, company earnings in the very long term can be described with high significance. Empirically, it is visible from long-term statistics that the demanded (required yield on capital markets has certain regularity. Thus, investors first require a yield above the stable inflation rate and then a dividend yield and a capital increase caused by the growth of earnings that influence the price, with the assumption that the P/E ratio is stable. By combining the Gordon model for current dividend value, the model of market capitalization of earnings (price/earnings ratio and bearing in mind the influence of the general price levels on company earnings, it is possible to adjust the price/earnings ratio by deriving a function of the required yield on capital markets measured by a market index through dividend yield and inflation rate above the stable inflation rate increased by profit growth. The S&P 500 index for example, has in the last 100 years grown by exactly the inflation rate above the stable inflation rate increased by profit growth. The comparison of two series of price/earnings ratios, a modelled one and an average 7-year ratio, shows a notable correlation in the movement of two series of variables, with a three year deviation. Therefore, it could be hypothesized that three years of the expected inflation level, dividend yield and profit growth rate of the market index are discounted in the current market prices. The conclusion is that, at the present time, the relationship between the adjusted average price/earnings ratio and its effect on the market index on one hand and the modelled price/earnings ratio on the other can clearly show the expected dynamics and course in the following period.
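
    A short derivation consistent with the combination described above (Gordon model plus earnings capitalization); the notation is introduced here and is not necessarily the paper's.

```latex
% Gordon growth model for price, divided through by next-year earnings E_1:
\[
P_0 \;=\; \frac{D_1}{r - g}
\qquad\Longrightarrow\qquad
\frac{P_0}{E_1} \;=\; \frac{D_1 / E_1}{r - g} \;=\; \frac{b}{r - g},
\]
% where D_1 is the expected dividend, b the payout ratio, g the growth rate of
% earnings, and r the required yield, here taken as a stable component above
% expected inflation plus the inflation rate itself.
```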

  7. Challenges in understanding, modelling, and mitigating Lake Outburst Flood Hazard: experiences from Central Asia

    Science.gov (United States)

    Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.

    2010-05-01

    Lake Outburst Floods can evolve from complex process chains like avalanches of rock or ice that produce flood waves in a lake which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced ground water flow (piping), or even to hydrostatic failure of ice dams which can cause sudden outflow of accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event and the rapid construction of a spillway - though problematic - has solved some hazardous situations (e.g. in the case of Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves that overtop and eventually weaken the dams. The analysis and the mitigation of glacial lake outburst flood (GLOF) hazard remains a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also

  8. Relationship Model Between Nightlight Data and Floor Area Ratio from High Resolution Images

    Science.gov (United States)

    Yan, M.; Xu, L.

    2017-09-01

    Extraction of the floor area ratio from high-resolution remote sensing images is a hot topic, and using nighttime light data to survey urban social and economic information is a developing trend. This document aims to provide a reference relationship model between VIIRS/NPP nightlight data and the floor area ratio derived from high-resolution ZY-3 images. It shows that there is a linear relationship between building shadow and the floor area ratio, with an R2 of 0.98, and a quadratic polynomial relationship between the floor area ratio and the nightlight, with an R2 of 0.611. We can conclude that VIIRS/NPP nightlight data can indicate the floor area ratio to some extent at the level of the administrative street.
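
    A minimal sketch of the quadratic fit between floor area ratio and nightlight radiance reported above; the sample values below are synthetic placeholders, not the study's measurements.

```python
# Quadratic polynomial fit between nightlight radiance and floor area ratio (FAR);
# synthetic values for illustration only.
import numpy as np

nightlight = np.array([5.0, 12.0, 20.0, 35.0, 50.0, 64.0])   # radiance (illustrative units)
far        = np.array([0.4, 0.9, 1.5, 2.4, 3.1, 3.5])        # floor area ratio

coeffs = np.polyfit(nightlight, far, deg=2)      # quadratic polynomial model
fitted = np.polyval(coeffs, nightlight)

ss_res = np.sum((far - fitted) ** 2)
ss_tot = np.sum((far - far.mean()) ** 2)
print("coefficients:", np.round(coeffs, 4), " R2 =", round(1 - ss_res / ss_tot, 3))
```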

  9. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    Science.gov (United States)

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameter that controls earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert opinion and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  10. Poisson's ratio of arterial wall - Inconsistency of constitutive models with experimental data.

    Science.gov (United States)

    Skacel, Pavel; Bursa, Jiri

    2016-02-01

    Poisson's ratio of fibrous soft tissues is analyzed in this paper on the basis of constitutive models and experimental data. Three different up-to-date constitutive models accounting for the dispersion of fibre orientations are analyzed. Their predictions of the anisotropic Poisson's ratios are investigated under finite strain conditions together with the effects of specific orientation distribution functions and of other parameters. The applied constitutive models predict a tendency toward lower (or even negative) out-of-plane Poisson's ratios. New experimental data on a porcine arterial layer under uniaxial tension in orthogonal directions are also presented and compared with the theoretical predictions and other literature data. The results point out the typical features of recent constitutive models with fibres concentrated in the circumferential-axial plane of arterial layers and their potential inconsistency with some experimental data. The volumetric (in)compressibility of arterial tissues is also discussed as a possible and significant factor influencing this inconsistency.

  11. Constant Latent Odds-Ratios Models and the Mantel-Haenszel Null Hypothesis

    Science.gov (United States)

    Hessen, David J.

    2005-01-01

    In the present paper, a new family of item response theory (IRT) models for dichotomous item scores is proposed. Two basic assumptions define the most general model of this family. The first assumption is local independence of the item scores given a unidimensional latent trait. The second assumption is that the odds-ratios for all item-pairs are…

  12. Three-dimensional displays for natural hazards analysis, using classified Landsat Thematic Mapper digital data and large-scale digital elevation models

    Science.gov (United States)

    Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.

    1991-01-01

    Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.

  13. River Loire levees hazard studies – CARDigues’ model principles and utilization examples on Blois levees

    Directory of Open Access Journals (Sweden)

    Durand Eduard

    2016-01-01

    Full Text Available Along the river Loire, in order to have a homogeneous method for specific risk assessment studies, a new model named CARDigues (for Levee Breach Hazard Calculation) was developed in a partnership with DREAL Centre-Val de Loire (owner of levees), Cerema and Irstea. This model makes it possible to estimate the probability of failure of every levee section and to integrate and cross different “stability” parameters such as topography and included structures, geology and material geotechnical characteristics, hydraulic loads… and observations from visual inspections or instrumentation results considered as disorders (seepage, burrowing animals, vegetation, pipes, etc.). The model and the integrated CARDigues tool make it possible to check, for each levee section, the probability of appearance and rupture of five breaching scenarios initiated by: overflowing, internal erosion, slope instability, external erosion and uplift. It has recently been updated and has been applied to several levee systems by different contractors. The article presents the CARDigues model principles and its recent developments (version V28.00), with examples on the river Loire, and how it is currently used for a relevant and global levee system diagnosis and assessment. Levee reinforcement and improvement management are also prospective applications of the CARDigues model.

  14. Historical support for a mixed law Lanchestrian Attrition Model: Helmbold's ratio

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.

    1989-11-01

    This is the first in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. Helmbold defined the "activity ratio" to be the ratio of the Lanchester coefficients in the pair of differential equations of the Lanchester square law of attrition. He derived an equivalence between this ratio and a ratio containing the initial and ending force sizes, herein called the Helmbold ratio, and demonstrated a relationship between the Helmbold ratio and the initial force ratio in a large number of historical battles. This paper reexamines the implications of this relationship and concludes that its existence, rather than being supportive of the Lanchester square law, is supportive of a mixed law lying between the Lanchester linear law and a Lanchester logarithmic law. It is shown that the Helmbold relationship can discriminate between several attrition formulations; however, while this is a necessary condition, it is not sufficient to conclude that data fitting the relationship were caused by a given attrition formulation. The conclusion is that the data are not fine enough to determine the differential form of the attrition equations but do lead to a statistical statement about the outcomes of battles. 8 refs., 51 figs., 8 tabs.
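
    For reference, the conventional form of the Lanchester square law referred to above, and the force-size ratio that follows from it; the notation is introduced here and is not necessarily the report's.

```latex
% Lanchester square law and its state equation:
\[
\frac{dx}{dt} = -a\,y, \qquad \frac{dy}{dt} = -b\,x
\qquad\Longrightarrow\qquad
a\left(y_0^2 - y_f^2\right) = b\left(x_0^2 - x_f^2\right),
\]
% so the ratio of the Lanchester coefficients can be written in terms of the
% initial and final force sizes:
\[
\frac{a}{b} \;=\; \frac{x_0^2 - x_f^2}{y_0^2 - y_f^2}.
\]
```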

  15. Risk assessment framework of fate and transport models applied to hazardous waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, S.T.

    1993-06-01

    Risk assessment is an increasingly important part of the decision-making process in the cleanup of hazardous waste sites. Despite guidelines from regulatory agencies and considerable research efforts to reduce uncertainties in risk assessments, there are still many issues unanswered. This paper presents new research results pertaining to fate and transport models, which will be useful in estimating exposure concentrations and will help reduce uncertainties in risk assessment. These developments include an approach for (1) estimating the degree of emissions and concentration levels of volatile pollutants during the use of contaminated water, (2) absorption of organic chemicals in the soil matrix through the skin, and (3) steady state, near-field, contaminant concentrations in the aquifer within a waste boundary.

  16. Modelling the impacts of coastal hazards on land-use development

    Science.gov (United States)

    Ramirez, J.; Vafeidis, A. T.

    2009-04-01

    Approximately 10% of the world's population live in close proximity to the coast and are potentially susceptible to tropical or extra-tropical storm-surge events. These events will be exacerbated by projected sea-level rise (SLR) in the 21st century. Accelerated SLR is one of the more certain impacts of global warming and can have major effects on humans and ecosystems. Of particular vulnerability are densely populated coastal urban centres containing globally important commercial resources, with assets in the billions USD. Moreover, the rates of growth of coastal populations, which are reported to be growing faster than the global means, are leading to increased human exposure to coastal hazards. Consequently, potential impacts of coastal hazards can be significant in the future and will depend on various factors but actual impacts can be considerably reduced by appropriate human decisions on coastal land-use management. At the regional scale, it is therefore necessary to identify which coastal areas are vulnerable to these events and explore potential long-term responses reflected in land usage. Land-use change modelling is a technique which has been extensively used in recent years for studying the processes and mechanisms that govern the evolution of land use and which can potentially provide valuable information related to the future coastal development of regions that are vulnerable to physical forcings. Although studies have utilized land-use classification maps to determine the impact of sea-level rise, few use land-use projections to make these assessments, and none have considered adaptive behaviour of coastal dwellers exposed to hazards. In this study a land-use change model, which is based on artificial neural networks (ANN), was employed for predicting coastal urban and agricultural development. The model uses as inputs a series of spatial layers, which include information on population distribution, transportation networks, existing urban centres, and

  17. Developing Sustainable Modeling Software and Necessary Data Repository for Volcanic Hazard Analysis -- Some Lessons Learnt

    Science.gov (United States)

    Patra, A. K.; Connor, C.; Webley, P.; Jones, M.; Charbonnier, S. J.; Connor, L.; Gallo, S.; Bursik, M. I.; Valentine, G.; Hughes, C. G.; Aghakhani, H.; Renschler, C. S.; Kosar, T.

    2014-12-01

    We report here on an effort to improve the sustainability, robustness and usability of the core modeling and simulation tools housed in the collaboratory VHub.org and used in the study of complex volcanic behavior. In particular, we focus on tools that support large scale mass flows (TITAN2D), ash deposition/transport and dispersal (Tephra2 and PUFF), and lava flows (Lava2). These tools have become very popular in the community, especially due to the availability of an online usage modality. The redevelopment of the tools to take advantage of new hardware and software advances was a primary thrust for the effort. However, as work started we reoriented the effort to also take advantage of significant new opportunities for supporting the complex workflows and use of distributed data resources that will enable effective and efficient hazard analysis.

  18. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) penalties, for variable selection in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.

  19. Doubly stochastic models for volcanic hazard assessment at Campi Flegrei caldera

    CERN Document Server

    Bevilacqua, Andrea

    2016-01-01

    This study provides innovative mathematical models for assessing the eruption probability and associated volcanic hazards, and applies them to the Campi Flegrei caldera in Italy. Throughout the book, significant attention is devoted to quantifying the sources of uncertainty affecting the forecast estimates. The Campi Flegrei caldera is certainly one of the world’s highest-risk volcanoes, with more than 70 eruptions over the last 15,000 years, prevalently explosive ones of varying magnitude, intensity and vent location. In the second half of the twentieth century the volcano apparently once again entered a phase of unrest that continues to the present. Hundreds of thousands of people live inside the caldera and over a million more in the nearby city of Naples, making a future eruption of Campi Flegrei an event with potentially catastrophic consequences at the national and European levels.

  20. Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation

    Science.gov (United States)

    Borga, M.; Creutin, J. D.

    Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two

  1. Hazardous Waste

    Science.gov (United States)

    ... you throw these substances away, they become hazardous waste. Some hazardous wastes come from products in our homes. Our garbage can include such hazardous wastes as old batteries, bug spray cans and paint ...

  2. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which considers only soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two to evaluate slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
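
    For context, one common infinite-slope factor-of-safety formulation of the kind these regional stability models build on; the notation is introduced here and this is not necessarily the exact expression used in SINMAP, LISA or Iverson's model.

```latex
% Infinite-slope factor of safety with a pore-pressure term (one common form):
\[
FS \;=\; \frac{c' + \left(\gamma_s z \cos^2\beta - u\right)\tan\phi'}
              {\gamma_s z \sin\beta \cos\beta},
\qquad u = \gamma_w z_w \cos^2\beta,
\]
% where c' is effective cohesion, \phi' the effective friction angle, \beta the
% slope angle, z the depth to the failure plane, z_w the water-table height above
% that plane, and \gamma_s, \gamma_w the soil and water unit weights.
```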

  3. Distribution modeling of hazardous airborne emissions from industrial campuses in Iraq via GIS techniques

    Science.gov (United States)

    Salwan Al-Hasnawi, S.; Salam Bash AlMaliki, J.; Falih Nazal, Zainab

    2017-08-01

    The presence of considerable amounts of hazardous elements in air may have prolonged lethal effects on residential and commercial areas and activities, especially those around the emission sources; hence it is important to monitor and anticipate these concentrations and to design effective spatial forecasting models for that purpose. Geographic information systems (GIS) were utilized to monitor, analyze and model the presence and concentrations of airborne Pb, Cr, and Zn in the atmosphere around certain industrial campuses in the northern part of Iraq. Diffusion patterns were determined for these elements via GIS geostatistical and spatial analysis extensions that implement kriging and inverse distance weighted (IDW) methods to interpolate a raster surface. The main determining factors, such as wind speed, ambient temperature and topographic distribution, were considered in order to design a prediction model that serves as an early alert for possible future accidents. Results of an eight-month observation program proved that the concentrations of the three elements significantly exceeded the Iraqi and WHO limits at most of the observed locations, especially in summer. The predicted models were also validated against the field measurements and showed a close match, especially for the geostatistical analysis map, which had around 4% error for the three tested elements.
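
    A minimal sketch of the inverse-distance-weighted (IDW) interpolation mentioned above, gridding point measurements onto a raster; the station coordinates and concentrations are synthetic placeholders.

```python
# Inverse-distance-weighted interpolation of point measurements; illustrative data.
import numpy as np

# Monitoring stations: (x, y) positions and measured Pb concentration (illustrative)
stations = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0], [7.0, 7.0]])
conc = np.array([0.8, 2.4, 1.1, 3.0])

def idw(target, points, values, power=2.0, eps=1e-12):
    """Estimate a value at `target` as a distance-weighted mean of `values`."""
    d = np.linalg.norm(points - target, axis=1)
    if np.any(d < eps):                 # exactly on a station: return its value
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Interpolate onto a coarse raster grid
xs, ys = np.meshgrid(np.linspace(0, 8, 5), np.linspace(0, 8, 5))
grid = np.array([[idw(np.array([x, y]), stations, conc)
                  for x, y in zip(row_x, row_y)]
                 for row_x, row_y in zip(xs, ys)])
print(np.round(grid, 2))
```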

  4. Converting HAZUS capacity curves to seismic hazard-compatible building fragility functions: effect of hysteretic models

    Science.gov (United States)

    Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem

    2008-01-01

    A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.

  5. Simulated hazards of loosing infection-free status in a Dutch BHV1 model.

    Science.gov (United States)

    Vonk Noordegraaf, A; Labrovic, A; Frankena, K; Pfeiffer, D U; Nielen, M

    2004-01-30

    A compulsory eradication programme for bovine herpesvirus 1 (BHV1) was implemented in the Netherlands in 1998. At the start of the programme, about 25% of the dairy herds were certified BHV1-free. Simulation models have played an important role in the decision-making process associated with BHV1 eradication. Our objective in this study was to improve understanding of model behaviour (as part of internal validation) regarding loss of the BHV1-free certificate by herds. Using a Cox proportional hazards model, the association between farm characteristics and the risk of certificate loss during simulation was quantified. The overall fraction of herds experiencing certificate loss among those initially certified during simulation was 3.0% in 6.5 years. Factors that increased the risk of earlier certificate loss in the final multivariable Cox model were a higher 'yearly number of cattle purchased', 'farm density within a 1 km radius' and 'cattle density within a 1 km radius'. The qualitative behaviour of the risk factors we found agreed with observations in field studies.
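
    A minimal sketch of the kind of Cox proportional hazards fit described above, using the lifelines package; the toy data frame and column names are hypothetical stand-ins for the simulation output, not the study's data.

```python
# Cox proportional hazards fit for time to certificate loss (lifelines); toy data.
import pandas as pd
from lifelines import CoxPHFitter

herds = pd.DataFrame({
    "years_to_loss": [6.5, 2.1, 6.5, 4.3, 6.5, 1.2],   # follow-up time (censored at 6.5 y)
    "lost_cert":     [0, 1, 0, 1, 0, 1],               # 1 = certificate lost
    "cattle_bought": [0, 25, 3, 12, 1, 40],            # yearly number purchased
    "farm_density":  [0.5, 2.0, 0.8, 1.5, 0.6, 2.5],   # farms within 1 km radius
})

cph = CoxPHFitter()
cph.fit(herds, duration_col="years_to_loss", event_col="lost_cert")
cph.print_summary()                  # coefficients, hazard ratios, confidence intervals
print(cph.hazard_ratios_)            # exp(coef) for each covariate
```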

  6. Extension of the dielectric breakdown model for simulation of viscous fingering at finite viscosity ratios.

    Science.gov (United States)

    Doorwar, Shashvat; Mohanty, Kishore K

    2014-07-01

    Immiscible displacement of viscous oil by water in a petroleum reservoir is often hydrodynamically unstable. Due to similarities between the physics of dielectric breakdown and immiscible flow in porous media, we extend the existing dielectric breakdown model to simulate viscous fingering patterns for a wide range of viscosity ratios (μ_r). At low values of the power-law index η, the system behaves like a stable Eden growth model, and as the value of η is increased to unity, diffusion-limited-aggregation-like fractals appear. This model is compared with our two-dimensional (2D) experiments to develop a correlation between the viscosity ratio and the power-law index, i.e., η = 10^-5 μ_r^0.8775. The 2D and three-dimensional (3D) simulation data appear scalable. The fingering patterns in 3D simulations at finite viscosity ratios appear qualitatively similar to the few experimental results published in the literature.
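
    The reported correlation between viscosity ratio and power-law index is straightforward to evaluate; a small sketch:

```python
def power_law_index(viscosity_ratio):
    """Empirical correlation from the 2D experiments: eta = 1e-5 * mu_r**0.8775."""
    return 1e-5 * viscosity_ratio ** 0.8775

for mu_r in (10, 100, 1000, 10000):
    print(f"mu_r = {mu_r:>6}: eta = {power_law_index(mu_r):.4g}")
```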

  7. Aspect Ratio Model for Radiation-Tolerant Dummy Gate-Assisted n-MOSFET Layout.

    Science.gov (United States)

    Lee, Min Su; Lee, Hee Chul

    2014-01-01

    In order to acquire radiation-tolerant characteristics in integrated circuits, a dummy gate-assisted n-type metal oxide semiconductor field effect transistor (DGA n-MOSFET) layout was adopted. The DGA n-MOSFET has a different channel shape compared with the standard n-MOSFET. The standard n-MOSFET has a rectangular channel shape, whereas the DGA n-MOSFET has an extended rectangular shape at the edge of the source and drain, which affects its aspect ratio. In order to increase its practical use, a new aspect ratio model is proposed for the DGA n-MOSFET and this model is evaluated through three-dimensional simulations and measurements of the fabricated devices. The proposed aspect ratio model for the DGA n-MOSFET exhibits good agreement with the simulation and measurement results.

  8. Theoretical quasar emission-line ratios. VII - Energy-balance models for finite hydrogen slabs

    Science.gov (United States)

    Hubbard, E. N.; Puetter, R. C.

    1985-01-01

    The present energy-balance calculations for finite, isobaric, hydrogen-slab quasar emission-line clouds incorporate probabilistic radiative transfer (RT) in all lines and bound-free continua of a five-level plus continuum model hydrogen atom. Attention is given to the line ratios, line formation regions, level populations and model applicability results obtained. The hydrogen lines and a variety of other considerations suggest the possibility of emission-line cloud densities in excess of 10^10 cm^-3. The models yield Lyman-beta/Lyman-alpha line ratios in agreement with observed values. The observed Lyman/Balmer ratios can be achieved with clouds whose column depths are about 10^22 cm^-2.

  9. On the Elastic Vibration Model for High Length-Diameter Ratio Rocket with Attitude Control System

    Institute of Scientific and Technical Information of China (English)

    朱伯立; 杨树兴

    2003-01-01

    An elastic vibration model that can be used in trajectory simulation is established for a high length-diameter ratio spinning rocket with an attitude control system. The basic theories of elastic dynamics and vibration dynamics were both used to set up the elastic vibration model of the rocket body. In order to study the problem more conveniently, the rocket body was simplified as a uniform beam with two free ends. The model was validated by simulation results and test data.

  10. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    Energy Technology Data Exchange (ETDEWEB)

    Margraf, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-02

    The potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands of conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response that result in changes to the particle distribution, it is necessary to scale the DUO2 results to obtain a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released under identical damage conditions between spent fuel and a surrogate material such as depleted uranium oxide (DUO2). A very limited number of full-scale experiments have been carried out to capture these data, and the oft-questioned validity of the results typically leads to overly conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimating the respirable release fraction of fragmented nuclear fuel.

  11. AschFlow - A dynamic landslide run-out model for medium scale hazard analysis.

    Science.gov (United States)

    Luna, Byron Quan; Blahut, Jan; van Asch, Theo; van Westen, Cees; Kappes, Melanie

    2015-04-01

    Landslide and debris flow hazard assessments require a scale-dependent analysis in order to mitigate damage and other negative consequences at the respective scales of occurrence. Medium- or large-scale landslide run-out modelling for many possible landslide initiation areas has been a cumbersome task in the past. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of run-out models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). Most of the existing physically based run-out models have complications in handling such situations, and therefore empirical methods have been used as a practical means to predict landslide mobility at a medium scale (1:10,000 to 1:50,000). In this context, a simple medium-scale numerical model for rapid mass movements in urban and mountainous areas was developed. The deterministic nature of the approach makes it possible to calculate the velocity, height and increase in mass by erosion, resulting in the estimation of various forms of impact exerted by debris flows at the medium scale. The established and implemented model ("AschFlow") is a 2-D one-phase continuum model that simulates the entrainment, spreading and deposition processes of a landslide or debris flow at a medium scale. The flow is thus treated as a single-phase material whose behavior is controlled by rheology (e.g. Voellmy or Bingham). The developed regional model "AschFlow" was applied and evaluated in well-documented areas with known past debris flow events.

  12. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling, what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  13. Observations and modelling of line intensity ratios of OV multiplet lines for ? - ?

    Science.gov (United States)

    Kato, T.; Rachlew-Källne, E.; Hörling, P.; Zastrow, K.-D.

    1996-09-01

    Line intensity ratios of OV multiplet lines for the (J = 2, 1, 0) transitions are studied using a collisional radiative model, and the results are compared with measurements from the reversed field pinch experiments Extrap T1 and T2 at KTH. The measured line intensity ratios deviate from the predictions of the model, and the possible causes for the discrepancy are discussed with regard to errors in rate coefficients and to non-quasi-steady-state effects.

  14. Baryon Magnetic Moment and Beta Decay Ratio in Colored Quark Cluster Model

    Institute of Scientific and Technical Information of China (English)

    HU Zheng-Feng; WANG Qing-Wu; DENG Jian-Liao; LEE Xi-Guo; DU Chun-Guang; WANG Yu-Zhu

    2008-01-01

    Baryon magnetic moments of p, n, Σ+, Σ-, Ξ0, Ξ- and the beta decay ratios (GA/GV) of n → p, Σ- → n and Ξ0 → Σ+ are calculated in a colored quark cluster model. With SU(3) breaking, the model gives a good fit to the experimental values of those baryon magnetic moments and the beta decay ratios. Our results show that the orbital motion has a significant contribution to the spin and magnetic moments of those baryons and that the strange component in the nucleon is small.
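
    For orientation, the sketch below evaluates the baseline SU(6) constituent quark model moments for the same baryons; it deliberately omits the orbital contributions and the colored quark cluster structure that the record's model adds, and the constituent quark moments used are conventional textbook values, not parameters from this work.

```python
# Naive SU(6) constituent quark model magnetic moments (nuclear magnetons).
# A baseline sketch only -- not the colored quark cluster model with orbital
# contributions and SU(3) breaking described in the record.
mu_u, mu_d, mu_s = 1.852, -0.972, -0.613   # constituent quark moments fitted to p, n, Lambda

moments = {
    "p":      (4 * mu_u - mu_d) / 3,
    "n":      (4 * mu_d - mu_u) / 3,
    "Sigma+": (4 * mu_u - mu_s) / 3,
    "Sigma-": (4 * mu_d - mu_s) / 3,
    "Xi0":    (4 * mu_s - mu_u) / 3,
    "Xi-":    (4 * mu_s - mu_d) / 3,
}
for baryon, mu in moments.items():
    print(f"{baryon:>6}: {mu:+.3f} mu_N")
```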

  15. Integrating GIS with AHP and Fuzzy Logic to generate hand, foot and mouth disease hazard zonation (HFMD-HZ) model in Thailand

    Science.gov (United States)

    Samphutthanon, R.; Tripathi, N. K.; Ninsawat, S.; Duboz, R.

    2014-12-01

    The main objective of this research was the development of an HFMD hazard zonation (HFMD-HZ) model by applying AHP and Fuzzy Logic AHP (FAHP) methodologies to weight spatial factors such as disease incidence, socio-economic factors and physical factors. The outputs of AHP and FAHP were input into a Geographic Information System (GIS) process for spatial analysis. Fourteen criteria were selected as important factors for analysis: disease incidence over the 10 years from 2003 to 2012, population density, road density, land use and physical features. The results showed a consistency ratio (CR) for these main criteria of 0.075427 for AHP and 0.092436 for FAHP; as both remained below the threshold of 0.1, the CR values were acceptable. After linking to actual geospatial data (disease incidence in 2013) through spatial analysis in GIS for validation, the results of the FAHP approach were found to match more accurately than those of the AHP approach. The zones with the highest hazard of HFMD outbreaks were located in two main areas: central Muang Chiang Mai district and its suburbs, and Muang Chiang Rai district and its vicinity. The resulting hazard maps may be useful for organizing HFMD protection plans.

  16. Hazard Mapping of Structurally Controlled Landslide in Southern Leyte, Philippines Using High Resolution Digital Elevation Model

    Science.gov (United States)

    Luzon, Paul Kenneth; Rochelle Montalbo, Kristina; Mahar Francisco Lagmay, Alfredo

    2014-05-01

    The 2006 Guinsaugon landslide in St. Bernard, Southern Leyte is the largest known mass movement of soil in the Philippines. It consisted of a 15 million m3 rockslide-debris avalanche from an approximately 700 m high escarpment produced by continuous movement of the Philippine fault at approximately 2.5 cm/year. The landslide was preceded by continuous heavy rainfall totaling 571.2 mm from February 8 to 12, 2006. The catastrophic landslide killed more than 1,000 people and displaced 19,000 residents over its 6,400 km path. To investigate the present-day morphology of the scar and potential failures that may occur, an analysis of a high-resolution digital elevation model (10 m resolution Synthetic Aperture Radar images from 2013) was conducted, leading to the generation of a structurally controlled landslide hazard map of the area. Discontinuity sets that could contribute to any failure mechanism were identified using Coltop 3D software, which uses a unique lower Schmidt-Lambert color scheme for any given dip and dip direction, making it easier to find the main morpho-structural orientations. Matterocking, a software package designed for structural analysis, was used to generate possible planes that could slide along the identified discontinuity sets. Conefall was then utilized to compute the extent to which the rock mass would run out. The results showed potential instabilities in the scarp area of the 2006 Guinsaugon landslide and in adjacent slopes because of the presence of steep discontinuities in the range of 45-60°. Apart from the potential landslides at the 2006 Guinsaugon site, the Conefall simulation generated a farther rock mass extent on adjacent slopes. In conclusion, there is a high probability of landslides in the municipality of St. Bernard, Southern Leyte, where the 2006 Guinsaugon landslide occurred. Concerned agencies may use the maps produced from this study for disaster preparedness and to facilitate long-term recovery planning for hazardous areas.

  17. Cellular parameters for track structure modelling of radiation hazard in space

    Science.gov (United States)

    Hollmark, M.; Lind, B.; Gudowska, I.; Waligorski, M.

    Based on irradiation with 45 MeV/u N and B ions and with Co-60 gamma rays, track structure cellular parameters have been fitted for V 79-379A Chinese hamster lung fibroblasts and for human melanoma cells (AA wtp53). These sets of parameters will be used to develop a calculation of radiation hazard in deep space, based on the system for evaluating, summing and reporting occupational exposures proposed in 1967 by a subcommittee of the NCRP, but never issued as an NCRP report. The key concepts of this system were: i) expression of the risk from all radiation exposures relative to that from a whole-body exposure to Co-60 radiation; ii) relating the risk from any exposure to that of the standard (Co-60) radiation through an "effectiveness factor" (ef), a product of sub-factors representing radiation quality, body region irradiated, and depth of penetration of radiation, with the product of absorbed dose and ef being termed the "exposure record unit" (eru); iii) development of ef values and a cumulative eru record for external and internal emitters. Application of this concept should provide a better description of the Gy-equivalent presently in use by NASA for evaluating risk in deep space than the equivalent dose following ICRP-60 recommendations. Dose and charged particle fluence levels encountered in space, particularly after Solar Particle Events, require that deterministic rather than stochastic effects be considered. Also, synergistic effects due to simultaneous multiple charged particle transfers may have to be considered. Thus, models applicable in radiotherapy, where the Gy-equivalent is also applied, in conjunction with transport calculations performed using, e.g., the ADAM and EVA phantoms, along the concepts of the 1967 NCRP system, may be more appropriate for evaluating the radiation hazard from external fields with a large flux and a major high-LET component.

  18. Local models for rainstorm-induced hazard analysis on Mediterranean river-torrential geomorphological systems

    Directory of Open Access Journals (Sweden)

    N. Diodato

    2004-01-01

    Damaging hydrogeomorphological events are defined as one or more simultaneous phenomena (e.g. accelerated erosion, landslides, flash floods and river floods), occurring in a spatially and temporally random way and triggered by rainfall of different intensity and extent. Storm rainfall values are highly dependent on weather conditions and relief. However, the impact of rainstorms in Mediterranean mountain environments depends mainly on climatic fluctuations in the short and long term, especially in rainfall quantity. An algorithm for the characterisation of this impact, called the Rainfall Hazard Index (RHI), is developed with a relatively inexpensive methodology. In the RHI modelling, we assume that the river-torrential system has adapted to the natural hydrological regime, and that a sudden fluctuation in this regime, especially one exceeding the thresholds of an acceptable range of flexibility, may have disastrous consequences for the mountain environment. The RHI integrates two rainfall variables based upon current and historical storm depth data, both of a fixed duration, and one dimensionless parameter representative of the degree of ecosystem flexibility. The approach was applied to a test site in the Benevento river-torrential landscape, Campania (Southern Italy). A database including data from 27 events which occurred during a 77-year period (1926-2002) was compared with the Benevento-station RHI(24h) for a qualitative validation. Trends in RHIx for annual maximum storms of 1, 3 and 24 h duration were also examined. Little change is observed for the 3- and 24-h storm durations, but a significant increase results in the hazard of short and intense storms (RHIx(1h)), in agreement with a reduction in return period for extreme rainfall events.

  19. Flood Hazard Mapping using Hydraulic Model and GIS: A Case Study in Mandalay City, Myanmar

    Directory of Open Access Journals (Sweden)

    Kyu Kyu Sein

    2016-01-01

    This paper presents the use of flood frequency analysis integrated with a 1D hydraulic model (HEC-RAS) and a Geographic Information System (GIS) to prepare flood hazard maps of different return periods for the Ayeyarwady River at Mandalay City in Myanmar. Gumbel's distribution was used to calculate the flood peaks of different return periods, namely 10 years, 20 years, 50 years, and 100 years. The flood peaks from the frequency analysis were input into the HEC-RAS model to find the corresponding flood levels and extents in the study area. The model results were integrated with ArcGIS to generate flood plain maps. Flood depths and extents were identified from the flood plain maps. Analysis of the 100-year return period flood plain map indicated that 157.88 km2 (17.54% of the study area) is likely to be inundated. The predicted flood depth ranges from greater than 0 to 24 m on the flood plains and the river. Depths between 3 and 5 m were identified in the urban areas of Chanayetharzan, Patheingyi, and Amarapura Townships. The largest inundated area, 85 km2, was in Amarapura Township.
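
    A minimal sketch of the Gumbel flood frequency step is given below, using method-of-moments parameter estimates and a hypothetical annual-maximum discharge series; the actual Ayeyarwady gauge record is not reproduced here.

```python
import numpy as np

# Hypothetical annual maximum discharges (m3/s); placeholder values only.
annual_max = np.array([21500, 24300, 19800, 26700, 23100, 28400, 22600,
                       25200, 20900, 27500, 24800, 23900, 26100, 22000])

mean, std = annual_max.mean(), annual_max.std(ddof=1)
alpha = np.sqrt(6) * std / np.pi           # Gumbel scale (method of moments)
u = mean - 0.5772 * alpha                  # Gumbel location

for T in (10, 20, 50, 100):                # return periods (years)
    x_T = u - alpha * np.log(-np.log(1 - 1 / T))
    print(f"{T:>3}-year flood peak: {x_T:,.0f} m3/s")
```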

  20. On Compound Poisson Processes Arising in Change-Point Type Statistical Models as Limiting Likelihood Ratios

    CERN Document Server

    Dachian, Serguei

    2010-01-01

    Different change-point type models encountered in statistical inference for stochastic processes give rise to different limiting likelihood ratio processes. In a previous paper of one of the authors it was established that one of these likelihood ratios, which is an exponential functional of a two-sided Poisson process driven by some parameter, can be approximated (for sufficiently small values of the parameter) by another one, which is an exponential functional of a two-sided Brownian motion. In this paper we consider yet another likelihood ratio, which is the exponent of a two-sided compound Poisson process driven by some parameter. We establish that, similarly to the Poisson type one, the compound Poisson type likelihood ratio can be approximated by the Brownian type one for sufficiently small values of the parameter. We also discuss the asymptotics for large values of the parameter and illustrate the results by numerical simulations.

  1. An Empirical Jet-Surface Interaction Noise Model with Temperature and Nozzle Aspect Ratio Effects

    Science.gov (United States)

    Brown, Cliff

    2015-01-01

    An empirical model for jet-surface interaction (JSI) noise produced by a round jet near a flat plate is described and evaluated. The model covers unheated and hot jet conditions (jet total temperature ratios from 1 to 2.7) in the subsonic range (acoustic Mach numbers Ma from 0.5 to 0.9), surface lengths from 0.6 to 10 (axial distance from the jet exit to the surface trailing edge, normalized by the nozzle exit diameter), and surface standoff distances from 0 to 1 (radial distance from the jet lipline to the surface, normalized by the axial distance from the jet exit to the surface trailing edge), using only second-order polynomials to provide predictable behavior. The JSI noise model is combined with an existing jet mixing noise model to produce exhaust noise predictions. Fit quality metrics and comparisons between the predicted and experimental data indicate that the model is suitable for many system-level studies. A first-order correction to the JSI source model that accounts for the effect of nozzle aspect ratio is also explored. This correction is based on changes to the potential core length and frequency scaling associated with rectangular nozzles up to an 8:1 aspect ratio. However, more work is needed to refine these findings into a formal model.
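
    The model's reliance on second-order polynomials can be illustrated with a small least-squares fit; the standoff distances and noise levels below are invented placeholders, not values from the NASA jet-surface test database.

```python
import numpy as np

# Hypothetical noise level (dB) versus normalized surface standoff distance
# at a fixed jet condition; illustrative values only.
standoff = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
level_db = np.array([112.0, 110.5, 108.8, 105.9, 103.7, 102.1, 101.0])

coeffs = np.polyfit(standoff, level_db, deg=2)     # second-order polynomial, as in the model
predict = np.poly1d(coeffs)
print("quadratic coefficients:", np.round(coeffs, 3))
print("predicted level at standoff 0.5:", round(predict(0.5), 1), "dB")
```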

  2. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing the SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on the outcome: underestimation in LIFEREG and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained by combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, the effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  3. Modeling the soil water retention properties of same-textured soils with different initial void ratios

    Science.gov (United States)

    Tan, Fang; Zhou, Wan-Huan; Yuen, Ka-Veng

    2016-11-01

    This study presents a method of predicting the soil water retention curve (SWRC) of a soil using a set of measured SWRC data from a soil with the same texture but different initial void ratio. The relationships of the volumetric water contents and the matric suctions between two samples with different initial void ratios are established. An adjustment parameter (β) is introduced to express the relationships between the matric suctions of two soil samples. The parameter β is a function of the initial void ratio, matric suction or volumetric water content. The function can take different forms, resulting in different predictive models. The optimal predictive models of β are determined for coarse-grained and fine-grained soils using the Bayesian method. The optimal models of β are validated by comparing the estimated matric suction and measured data. The comparisons show that the proposed method produces more accurate SWRCs than do other models for both coarse-grained and fine-grained soils. Furthermore, the influence of the model parameters of β on the predicted matric suction and SWRC is evaluated using Latin Hypercube sampling. An uncertainty analysis shows that the reliability of the predicted SWRC decreases with decreasing water content in fine-grained soils, and the initial void ratio has no apparent influence on the reliability of the predicted SWRCs in coarse-grained and fine-grained soils.

  4. Dynamic Modeling of Hydraulic Power Steering System with Variable Ratio Rack and Pinion Gear

    Science.gov (United States)

    Zhang, Nong; Wang, Miao

    A comprehensive mathematical model of a typical hydraulic power steering system equipped with a variable ratio rack and pinion gear is developed. The steering system's dynamic characteristics are investigated and its forced vibrations are compared with those obtained from a counterpart system with a constant ratio rack and pinion gear. The modeling details of the mechanism subsystem, the hydraulic supply lines subsystem and the rotary spool valve subsystem are provided and included in the integrated steering system model. Numerical simulations are conducted to investigate the dynamics of the nonlinear parametric steering system. The comparison between simulated and experimental results shows that the model accurately captures the boost characteristics of the rotary spool valve, which is the key component of the hydraulic power steering system. The variable ratio rack and pinion gear behaves significantly differently from its constant ratio counterpart: it not only affects the system's natural frequencies but also reduces vibrations under constant-rate and ramp torque steering inputs. The developed steering model produces valid predictions of the system's behavior and could therefore assist engineers in the design and analysis of integrated steering systems.

  5. Experiments using machine learning to approximate likelihood ratios for mixture models

    Science.gov (United States)

    Cranmer, K.; Pavez, J.; Louppe, G.; Brooks, W. K.

    2016-10-01

    Likelihood ratio tests are a key tool in many fields of science. In order to evaluate the likelihood ratio, the likelihood function is needed. However, it is common in fields such as High Energy Physics to have complex simulations that describe the distribution while not having a description of the likelihood that can be directly evaluated. In this setting it is impossible or computationally expensive to evaluate the likelihood. It is, however, possible to construct an equivalent version of the likelihood ratio that can be evaluated by using discriminative classifiers. We show how this can be used to approximate the likelihood ratio when the underlying distribution is a weighted sum of probability distributions (e.g. a signal plus background model). We demonstrate how the results can be considerably improved by decomposing the ratio and using a set of classifiers in a pairwise manner on the components of the mixture model, and how this can be used to estimate the unknown coefficients of the model, such as the signal contribution.
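
    The core idea, sometimes called the likelihood ratio trick, is that a probabilistic classifier trained to separate samples from two distributions yields an estimate of their density ratio. A hedged one-dimensional sketch with scikit-learn, using Gaussian toy distributions rather than a physics simulation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy one-dimensional problem: reference model N(0,1), alternative model N(2,1).
x0 = rng.normal(0.0, 1.0, 20000)     # samples from the reference (background-only) model
x1 = rng.normal(2.0, 1.0, 20000)     # samples from the alternative (signal) model

X = np.concatenate([x0, x1]).reshape(-1, 1)
y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])

clf = LogisticRegression().fit(X, y)

def approx_likelihood_ratio(x):
    """Likelihood-ratio trick: r(x) = p1(x)/p0(x) ~ s(x)/(1 - s(x)) for a calibrated classifier."""
    s = clf.predict_proba(np.asarray(x).reshape(-1, 1))[:, 1]
    return s / (1.0 - s)

# Exact ratio for this Gaussian toy problem, for comparison: exp(2x - 2)
x_test = np.array([-1.0, 0.0, 1.0, 2.0])
exact = np.exp(2 * x_test - 2)
print(np.round(approx_likelihood_ratio(x_test), 2), np.round(exact, 2))
```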

  6. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)

    2015-06-15

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.

  7. Geographic risk modeling of childhood cancer relative to county-level crops, hazardous air pollutants and population density characteristics in Texas

    Directory of Open Access Journals (Sweden)

    Zhu Li

    2008-09-01

    Background: Childhood cancer has been linked to a variety of environmental factors, including agricultural activities, industrial pollutants and population mixing, but etiologic studies have often been inconclusive or inconsistent when considering specific cancer types. More specific exposure assessments are needed, and it would be helpful to optimize future studies to incorporate knowledge of high-risk locations or geographic risk patterns. The objective of this study was to evaluate potential geographic risk patterns in Texas, accounting for the possibility that multiple cancers may have similar geographic risk patterns. Methods: A spatio-temporal risk modeling approach was used, whereby 19 childhood cancer types were modeled as potentially correlated within county-years. The standard morbidity ratios were modeled as functions of intensive crop production, intensive release of hazardous air pollutants, population density, and rapid population growth. Results: There was supportive evidence for elevated risks for germ cell tumors and "other" gliomas in areas of intense cropping and for hepatic tumors in areas of intense release of hazardous air pollutants. The risk for Hodgkin lymphoma appeared to be reduced in areas of rapidly growing population. Elevated spatial risks included four cancer histotypes ("other" leukemias, Central Nervous System (CNS) embryonal tumors, CNS "other" gliomas and hepatic tumors) with greater than 95% likelihood of elevated risks in at least one county. Conclusion: The Bayesian implementation of the Multivariate Conditional Autoregressive model provided a flexible approach to the spatial modeling of multiple childhood cancer histotypes. The current study identified geographic factors supporting more focused studies of germ cell tumors and "other" gliomas in areas of intense cropping, hepatic cancer near Hazardous Air Pollutant (HAP) release facilities and specific locations with increased risks for CNS embryonal tumors and

  8. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    Science.gov (United States)

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.

  9. Comparison of empirical and data driven hydrometeorological hazard models on coastal cities of São Paulo, Brazil

    Science.gov (United States)

    Koga-Vicente, A.; Friedel, M. J.

    2010-12-01

    Every year thousands of people are affected by flood and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of high susceptibility, resulting from the large amount of energy available to form storms, and high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools for managing this kind of risk. In this study, two different modeling approaches for predicting hydrometeorological hazards were compared in 12 cities on the coast of São Paulo, Brazil, for 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). Using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (population distribution and total population, income and educational characteristics, poverty intensity, and the human development index), binary hazard responses were obtained. Model performance assessed by cross-validation indicated that the respective MLR and SOM model accuracies were about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.

  10. A forecasting and forewarning model for methane hazard in working face of coal mine based on LS-SVM

    Institute of Scientific and Technical Information of China (English)

    CAO Shu-gang; LIU Yan-bao; WANG Yan-ping

    2008-01-01

    To improve the precision and reliability of predicting the methane hazard in the working face of a coal mine, we propose a forecasting and forewarning model for methane hazard based on a least squares support vector machine (LS-SVM) multi-classifier and regression machine. For the forecasting model, the methane concentration is treated as a nonlinear time series, and time series analysis with LS-SVM regression is used to predict changes in methane concentration. For the forewarning model, which is based on the forecasting results, the methane hazard is classified into four grades by the LS-SVM multi-classification method: normal, attention, warning and danger. According to the forewarning results, corresponding measures are taken. The model was used to forecast and forewarn the K9 working face. The results show that LS-SVM regression forecasting has a high precision and that forewarning results based on the LS-SVM multi-classifier are credible. Therefore, it is an effective model-building method for continuous prediction of methane concentration and hazard forewarning in the working face.

  11. Estimation of financial loss ratio for E-insurance:a quantitative model

    Institute of Scientific and Technical Information of China (English)

    钟元生; 陈德人; 施敏华

    2002-01-01

    In view of the risks of E-commerce and the insurance industry's response to them, this paper addresses one important aspect of insurance, namely the estimation of the financial loss ratio, which is one of the most difficult problems facing the E-insurance industry. The paper proposes a quantitative model for estimating the E-insurance financial loss ratio. The model is based on gross income per enterprise and the CSI/FBI computer crime and security survey. The results presented are reasonable and valuable for both the insurer and the insured, and thus can be accepted by both. We must point out that, under our assumptions, the financial loss ratio varied very little (0.233% in 1999 and 0.236% in 2000), although there was considerable variation in the underlying data of the CSI/FBI survey.

  12. Mating behavior, population growth, and the operational sex ratio: a periodic two-sex model approach.

    Science.gov (United States)

    Jenouvrier, Stéphanie; Caswell, Hal; Barbraud, Christophe; Weimerskirch, Henri

    2010-06-01

    We present a new approach to modeling two-sex populations, using periodic, nonlinear two-sex matrix models. The models project the population growth rate, the population structure, and any ratio of interest (e.g., operational sex ratio). The periodic formulation permits inclusion of highly seasonal behavioral events. A periodic product of the seasonal matrices describes annual population dynamics. The model is nonlinear because mating probability depends on the structure of the population. To study how the vital rates influence population growth rate, population structure, and operational sex ratio, we used sensitivity analysis of frequency-dependent nonlinear models. In nonlinear two-sex models the vital rates affect growth rate directly and also indirectly through effects on the population structure. The indirect effects can sometimes overwhelm the direct effects and are revealed only by nonlinear analysis. We find that the sensitivity of the population growth rate to female survival is negative for the emperor penguin, a species with highly seasonal breeding behavior. This result could not occur in linear models because changes in population structure have no effect on per capita reproduction. Our approach is applicable to ecological and evolutionary studies of any species in which males and females interact in a seasonal environment.
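
    A minimal sketch of a nonlinear, periodic two-sex iteration is shown below; it uses a harmonic-mean mating function and made-up seasonal survival and fecundity values, not the emperor penguin parameterization analyzed in the paper.

```python
import numpy as np

# Minimal nonlinear, periodic two-sex sketch. State: n = [females, males].
# A year is a breeding season followed by a survival season.

def breeding(n, fecundity=0.8, primary_sex_ratio=0.5):
    f, m = n
    pairs = 2.0 * f * m / (f + m) if (f + m) > 0 else 0.0   # harmonic-mean mating function
    births = fecundity * pairs                               # frequency-dependent reproduction
    return np.array([f + primary_sex_ratio * births,
                     m + (1 - primary_sex_ratio) * births])

def survival(n, s_f=0.90, s_m=0.85):
    return np.array([s_f, s_m]) * n          # diagonal seasonal survival matrix

n = np.array([100.0, 100.0])
for year in range(200):                       # iterate the periodic (seasonal) annual map
    n_new = survival(breeding(n))
    growth = n_new.sum() / n.sum()
    n = n_new

print("asymptotic annual growth rate:", round(growth, 4))
print("operational sex ratio (M/F):", round(n[1] / n[0], 3))
```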

  13. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential for effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented and tested on a real car. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.

  14. Geospatial modeling of plant stable isotope ratios - the development of isoscapes

    Science.gov (United States)

    West, J. B.; Ehleringer, J. R.; Hurley, J. M.; Cerling, T. E.

    2007-12-01

    Large-scale spatial variation in stable isotope ratios can yield critical insights into the spatio-temporal dynamics of biogeochemical cycles, animal movements, and shifts in climate, as well as anthropogenic activities such as commerce, resource utilization, and forensic investigation. Interpreting these signals requires that we understand and model the variation. We report progress in our development of plant stable isotope ratio landscapes (isoscapes). Our approach utilizes a GIS, gridded datasets, a range of modeling approaches, and spatially distributed observations. We synthesize findings from four studies to illustrate the general utility of the approach and its ability to represent observed spatio-temporal variability in plant stable isotope ratios, and we also outline some specific areas of uncertainty. We further address two basic but critical questions central to our ability to model plant stable isotope ratios using this approach: 1. Do the continuous precipitation isotope ratio grids represent reasonable proxies for plant source water? 2. Do continuous climate grids (as is or modified) represent a reasonable proxy for the climate experienced by plants? Plant components modeled include leaf water, grape water (extracted from wine), bulk leaf material (Cannabis sativa; marijuana), and seed oil (Ricinus communis; castor bean). Our approaches to modeling the isotope ratios of these components varied from highly sophisticated process models to simple one-step fractionation models to regression approaches. The leaf water isoscapes were produced using steady-state models of enrichment and continuous grids of annual average precipitation isotope ratios and climate. These were compared to other modeling efforts, as well as to a relatively sparse but geographically distributed dataset from the literature. The latitudinal distributions and global averages compared favorably to other modeling efforts, and the observational data compared well to model predictions.

  15. Theoretical Model for Predicting Moisture Ratio during Drying of Spherical Particles in a Rotary Dryer

    Directory of Open Access Journals (Sweden)

    F. T. Ademiluyi

    2013-01-01

    A mathematical model was developed for predicting the drying kinetics of spherical particles in a rotary dryer. Drying experiments were carried out by drying fermented ground cassava particles in a bench-scale rotary dryer at inlet air temperatures of 115–230°C, air velocities of 0.83–1.55 m/s, feed masses of 50–500 g, a drum drive speed of 8 rpm, and a feed drive speed of 100 rpm to validate the model. The data obtained from the experiments were used to calculate the experimental moisture ratio, which compared well with the theoretical moisture ratio calculated from the newly developed Abowei-Ademiluyi model. The comparisons and correlations of the results indicate that the validation and performance of the established model are reasonable.
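
    The Abowei-Ademiluyi model itself is not reproduced in this record, so the sketch below uses a generic single-parameter thin-layer (Lewis) drying model as a stand-in to show how an experimental moisture ratio is formed and a drying constant fitted; all numbers are illustrative.

```python
import numpy as np

def moisture_ratio(M, M0, Me):
    """Experimental moisture ratio from instantaneous, initial and equilibrium moisture contents."""
    return (M - Me) / (M0 - Me)

def lewis_model(t, k):
    """Generic thin-layer drying model MR = exp(-k t); a stand-in, not the Abowei-Ademiluyi model."""
    return np.exp(-k * t)

# Hypothetical drying run: moisture content (dry basis) sampled every 10 minutes
t = np.arange(0, 70, 10)                          # min
M = np.array([1.20, 0.82, 0.58, 0.41, 0.30, 0.23, 0.18])
M0, Me = 1.20, 0.05

mr_exp = moisture_ratio(M, M0, Me)
k = -np.polyfit(t, np.log(mr_exp), 1)[0]          # linearized least-squares estimate of k
print("fitted drying constant k =", round(k, 4), "1/min")
print("model MR at t = 45 min:", round(lewis_model(45, k), 3))
```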

  16. Selection of noise power ratio spectrum models for electronic measurement of the Boltzmann constant

    CERN Document Server

    Coakley, Kevin J

    2016-01-01

    In the electronic measurement of the Boltzmann constant based on Johnson noise thermometry, the ratio of the power spectral densities of thermal noise across a resistor and pseudo-random noise synthetically generated by a quantum-accurate voltage-noise source varies with frequency due to mismatch between transmission lines. We model this ratio spectrum as an even polynomial function of frequency. For any given frequency range, defined by the maximum frequency $f_{max}$, we select the optimal polynomial ratio spectrum model with a cross-validation method and, using a resampling method, estimate the conditional uncertainty of the constant term in the ratio spectrum model in a way that accounts for both random and systematic effects associated with imperfect knowledge of the model. We select $f_{max}$ by minimizing this conditional uncertainty. Since many values of $f_{max}$ yield conditional uncertainties close to the observed minimum value on a frequency grid, we quantify an additional component of uncertainty as...
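
    A hedged sketch of the even-polynomial, cross-validated model selection idea is given below on synthetic data; it is not the analysis code used in the record, and the noise level and true spectrum are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ratio spectrum: a true even polynomial in frequency plus noise
f = np.linspace(0.01, 1.0, 200)                     # normalized frequency
true = 1.0 + 0.05 * f**2 - 0.02 * f**4
ratio = true + rng.normal(0, 0.005, f.size)

def cv_error(degree, folds=5):
    """5-fold cross-validation error for an even polynomial (powers of f^2) of given degree."""
    idx = rng.permutation(f.size)
    errs = []
    for k in range(folds):
        test = idx[k::folds]
        train = np.setdiff1d(idx, test)
        X = np.vander(f**2, degree + 1, increasing=True)   # columns: 1, f^2, f^4, ...
        beta, *_ = np.linalg.lstsq(X[train], ratio[train], rcond=None)
        errs.append(np.mean((ratio[test] - X[test] @ beta) ** 2))
    return np.mean(errs)

for d in range(0, 5):
    print(f"even-polynomial degree {2*d}: CV mean-squared error = {cv_error(d):.2e}")
```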

  17. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2012-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The po

  18. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2015-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but as usual the asymptotic results do not require normally distr

  19. Reliability estimation and remaining useful lifetime prediction for bearing based on proportional hazard model

    Institute of Scientific and Technical Information of China (English)

    王鹭; 张利; 王学芝

    2015-01-01

    As the central component of rotating machinery, bearings require performance reliability assessment and remaining useful lifetime prediction, which are of crucial importance in condition-based maintenance to reduce maintenance cost and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings is proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, used as an identification model, predicts the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, a degradation factor is defined from these features, and a proportional hazards model is built to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The results show the plausibility and effectiveness of the proposed approach, which can facilitate bearing reliability estimation and RUL prediction.

  20. Predictive models in hazard assessment of Great Lakes contaminants for fish

    Science.gov (United States)

    Passino, Dora R. May

    1986-01-01

    A hazard assessment scheme was developed and applied to predict potential harm to aquatic biota of nearly 500 organic compounds detected by gas chromatography/mass spectrometry (GC/MS) in Great Lakes fish. The frequency of occurrence and estimated concentrations of compounds found in lake trout (Salvelinus namaycush) and walleyes (Stizostedion vitreum vitreum) were compared with available manufacturing and discharge information. Bioconcentration potential of the compounds was estimated from available data or from calculations of quantitative structure-activity relationships (QSAR). Investigators at the National Fisheries Research Center-Great Lakes also measured the acute toxicity (48-h EC50's) of 35 representative compounds to Daphnia pulex and compared the results with acute toxicity values generated by QSAR. The QSAR-derived toxicities for several chemicals underestimated the actual acute toxicity by one or more orders of magnitude. A multiple regression of log EC50 on log water solubility and molecular volume proved to be a useful predictive model. Additional models providing insight into toxicity incorporate solvatochromic parameters that measure dipolarity/polarizability, hydrogen bond acceptor basicity, and hydrogen bond donor acidity of the solute (toxicant).
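
    The regression of log EC50 on log water solubility and molecular volume mentioned above can be sketched with an ordinary least-squares fit; the values below are illustrative placeholders, not the Great Lakes measurements.

```python
import numpy as np

# Hypothetical QSAR-style data: log(EC50) regressed on log(water solubility) and molecular volume.
log_sol = np.array([-1.2, 0.3, 1.1, -2.5, 0.8, -0.4, 1.9, -1.8])
mol_vol = np.array([210.0, 150.0, 120.0, 260.0, 140.0, 190.0, 100.0, 240.0])
log_ec50 = np.array([-0.9, 0.6, 1.4, -1.8, 1.0, -0.2, 2.1, -1.3])

X = np.column_stack([np.ones_like(log_sol), log_sol, mol_vol])   # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, log_ec50, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((log_ec50 - pred) ** 2) / np.sum((log_ec50 - log_ec50.mean()) ** 2)
print("coefficients (intercept, log solubility, molecular volume):", np.round(beta, 3))
print("R^2 =", round(r2, 3))
```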

  1. Survival prediction based on compound covariate under Cox proportional hazard models.

    Directory of Open Access Journals (Sweden)

    Takeshi Emura

    Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as compound covariate prediction performed under univariate Cox proportional hazards models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from statistical large-sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in the statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in the R package "compound.Cox" available on CRAN at http://cran.r-project.org/.
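
    A hedged sketch of the compound covariate construction (univariate Cox coefficients used as weights for a single composite predictor) is shown below using the Python lifelines package on simulated data; the record's own implementation is the R package compound.Cox, and the dimensions and effect sizes here are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n, p = 200, 30                                     # toy dimensions, not the real gene-expression data

X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"x{j}" for j in range(p)])
true_risk = 0.4 * X.iloc[:, :5].sum(axis=1)        # only the first 5 covariates carry signal
raw_time = rng.exponential(np.exp(-true_risk))
cap = np.quantile(raw_time, 0.8)                   # administrative censoring at the 80th percentile
df = X.assign(time=np.minimum(raw_time, cap), event=(raw_time < cap).astype(int))

# Step 1: a univariate Cox fit per covariate gives one coefficient each
betas = {}
for col in X.columns:
    fit = CoxPHFitter().fit(df[[col, "time", "event"]], duration_col="time", event_col="event")
    betas[col] = fit.params_[col]

# Step 2: the compound covariate is the beta-weighted sum of the covariates
df["compound"] = sum(betas[c] * df[c] for c in X.columns)

# Step 3: the single compound covariate then serves as the predictor of survival
CoxPHFitter().fit(df[["compound", "time", "event"]],
                  duration_col="time", event_col="event").print_summary()
```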

  2. Modeling and hazard mapping of complex cascading mass movement processes: the case of glacier lake 513, Carhuaz, Peru

    Science.gov (United States)

    Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo

    2013-04-01

    The Cordilleras in Peru are especially vulnerable to, and affected by, impacts from climate change. Local communities and cities often lie directly within reach of major hazard potentials such as lake outburst floods (aluviones), mud/debris flows (huaycos) and large rock/ice avalanches. These have repeatedly and strongly affected the region over the last decades and since the last century, and thousands of people have been killed. One of the most recent events in the Cordillera Blanca occurred on 11 April 2010, when a rock/ice avalanche from the top of Hualcán mountain, NE of the town of Carhuaz, impacted the glacier lake 513 (Laguna 513), caused displacement waves and triggered an outburst flood wave. The flow repeatedly transformed between debris flow and hyperconcentrated flow and eventually caused significant damage in Carhuaz. This event motivated early warning and prevention efforts to reduce risks related to ice/rock avalanches and glacier lake outburst floods (GLOFs). One of the basic components of an early warning system is the assessment, understanding and communication of the relevant hazards and risks. Here we report on the methodology and results of generating GLOF-related hazard maps for Carhuaz based on numerical modeling and field work. This exercise required an advanced concept and the implementation of different mass movement models. Specifically, numerical models were applied to simulate avalanche flow, avalanche impact on the lake, displacement wave generation and lake overtopping, and eventually the propagation of the outburst flood with rheology changing between debris flow and hyperconcentrated flow. We adopted a hazard mapping procedure slightly adjusted from guidelines developed in Switzerland and in the Andes region. A methodology was thereby developed to translate results from numerical mass movement modeling into hazard maps. The resulting hazard map was verified and adjusted during field work. This study shows

  3. Modelling of Specific Moisture Extraction Rate and Leakage Ratio in a Condensing Tumble Dryer

    OpenAIRE

    Stawreberg, Lena; Nilsson, Lars

    2010-01-01

    The use of tumble dryers in households is becoming more common. Tumble dryers, however, consume large amounts of electric energy. A statistical model of the tumble dryer is created from a design of experiments. The model will be used to find the best settings for the power supply to the heater, the internal airflow and the external airflow in order to reach a high specific moisture extraction rate (SMER) and a low leakage ratio of water vapour. The aim also involves expl...

  4. A Wake Model for the Prediction of Propeller Performance at Low Advance Ratios

    Directory of Open Access Journals (Sweden)

    Ye Tian

    2012-01-01

    Full Text Available A low order panel method is used to predict the performance of propellers. A wake alignment model based on a pseudounsteady scheme is proposed and implemented. The results from this full wake alignment (FWA model are correlated with available experimental data, and results from RANS for some propellers at design and low advance ratios. Significant improvements have been found in the predicted integrated forces and pressure distributions.

  5. Effect of silica/titania ratio on enhanced photooxidation of industrial hazardous materials by microwave treated mesoporous SBA-15/TiO2 nanocomposites

    Science.gov (United States)

    Mehta, Akansha; Mishra, Amit; Sharma, Manisha; Singh, Satnam; Basu, Soumen

    2016-07-01

    In this study a microwave-assisted technique was adopted for the synthesis of different weight ratios of TiO2 dispersed on a Santa Barbara Amorphous-15 (SBA-15) support. Morphological study revealed TiO2 particles (4-10 nm) uniformly distributed on SBA-15, while increasing the SBA-15 content resulted in higher specific surface areas (237-524 m2/g). The diffraction intensity of the (101) plane of the anatase polymorph increased with increasing TiO2 ratio. All the photocatalysts were mesoporous and followed a type IV isotherm; SBA-15 possessed the highest pore volume (0.93 cm3 g-1), which consistently decreased with TiO2 content and was lowest (0.50 cm3 g-1) for 5 wt% TiO2, followed by P25 (0.45 cm3 g-1), while the pore diameter increased after TiO2 incorporation due to pore strain. The photocatalytic activity of the nanocomposites was analysed for the photodegradation of alizarin dye and pentachlorophenol (PCP) under UV light irradiation. The reaction kinetics indicated the highest efficiency (98% for alizarin and 94% for PCP) for 5 wt% TiO2 compared to the other photocatalysts. These nanocomposites were reused for several cycles, which is most important for heterogeneous photocatalytic degradation reactions.

  6. Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)

    Energy Technology Data Exchange (ETDEWEB)

    Musson, R. M. W. [British Geological Survey, West Mains Road, Edinburgh, EH9 3LA (United Kingdom); Sellami, S. [Swiss Seismological Service, ETH-Hoenggerberg, Zuerich (Switzerland); Bruestle, W. [Regierungspraesidium Freiburg, Abt. 9: Landesamt fuer Geologie, Rohstoffe und Bergbau, Ref. 98: Landeserdbebendienst, Freiburg im Breisgau (Germany)

    2009-05-15

    The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions had also to be answered about aspects of the approach to modelling to be used: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure has to be considered. (author)

  7. A Course in Hazardous Chemical Spills: Use of the CAMEO Air Dispersion Model to Predict Evacuation Distances.

    Science.gov (United States)

    Kumar, Ashok; And Others

    1989-01-01

    Provides an overview of the Computer-Aided Management of Emergency Operations (CAMEO) model and its use in the classroom as a training tool in the "Hazardous Chemical Spills" course. Presents six problems illustrating classroom use of CAMEO. Lists 16 references. (YP)

  8. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  9. A Test of Carbon and Oxygen Stable Isotope Ratio Process Models in Tree Rings.

    Science.gov (United States)

    Roden, J. S.; Farquhar, G. D.

    2008-12-01

    Stable isotopes ratios of carbon and oxygen in tree ring cellulose have been used to infer environmental change. Process-based models have been developed to clarify the potential of historic tree ring records for meaningful paleoclimatic reconstructions. However, isotopic variation can be influenced by multiple environmental factors making simplistic interpretations problematic. Recently, the dual isotope approach, where the variation in one stable isotope ratio (e.g. oxygen) is used to constrain the interpretation of variation in another (e.g. carbon), has been shown to have the potential to de-convolute isotopic analysis. However, this approach requires further testing to determine its applicability for paleo-reconstructions using tree-ring time series. We present a study where the information needed to parameterize mechanistic models for both carbon and oxygen stable isotope ratios were collected in controlled environment chambers for two species (Pinus radiata and Eucalyptus globulus). The seedlings were exposed to treatments designed to modify leaf temperature, transpiration rates, stomatal conductance and photosynthetic capacity. Both species were grown for over 100 days under two humidity regimes that differed by 20%. Stomatal conductance was significantly different between species and for seedlings under drought conditions but not between other treatments or humidity regimes. The treatments produced large differences in transpiration rate and photosynthesis. Treatments that effected photosynthetic rates but not stomatal conductance influenced carbon isotope discrimination more than those that influenced primarily conductance. The various treatments produced a range in oxygen isotope ratios of 7 ‰. Process models predicted greater oxygen isotope enrichment in tree ring cellulose than observed. The oxygen isotope ratios of bulk leaf water were reasonably well predicted by current steady-state models. However, the fractional difference between models that

  10. SCEC Community Modeling Environment (SCEC/CME) - Seismic Hazard Analysis Applications and Infrastructure

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Kesselman, C.; Moore, R.; Minster, B.; SCEC ITR Collaboration

    2003-12-01

    The Southern California Earthquake Center (SCEC) has formed a Geoscience/IT partnership to develop an advanced information infrastructure for system-level earthquake science in Southern California. This SCEC/ITR partnership comprises SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Institutions for Research in Seismology (IRIS), and the U.S. Geological Survey. This collaboration recently completed the second year in a five-year National Science Foundation (NSF) funded ITR project called the SCEC Community Modeling Environment (SCEC/CME). The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed by project collaborators include a Probabilistic Seismic Hazard Analysis system called OpenSHA [Field et al., this meeting]. OpenSHA computational elements that are currently available include a collection of attenuation relationships, and several Earthquake Rupture Forecasts (ERF's). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. A Rupture Dynamic Model (RDM) has also been developed that couples a rupture dynamics simulation into an anelastic wave model. The collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of SHA programs. To support computationally expensive simulations, we have constructed a grid-based system utilizing Globus software [Kesselman et al., this meeting]. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC, NPACI and Teragrid High Performance Computing Centers. We have

  11. MODELING AND FORECASTING THE GROSS ENROLLMENT RATIO IN ROMANIAN PRIMARY SCHOOL

    Directory of Open Access Journals (Sweden)

    MARINOIU CRISTIAN

    2014-06-01

    Full Text Available The gross enrollment ratio in primary school is one of the basic indicators used to evaluate the objectives of the educational system. Knowing its evolution allows a more rigorous substantiation of strategies and human resources policies, not only in the educational field but also in the economic one. In this paper we propose an econometric model to describe the gross enrollment ratio in Romanian primary school and forecast it for the coming years, using the Box-Jenkins methodology as a guide. The results indicate a continued decrease of this ratio over the coming years.
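    As a rough illustration of the Box-Jenkins approach described above (not the authors' actual model specification or data, which are not given in the abstract), an ARIMA model can be fitted to an annual enrollment-ratio series and used to forecast the next few years:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical gross enrollment ratios (%) in primary school, one value per year
ger = np.array([97.2, 96.8, 96.5, 96.9, 96.1, 95.8, 95.2, 94.9, 94.5,
                94.1, 93.8, 93.2, 92.9, 92.4, 92.0, 91.6, 91.1, 90.7])

# Fit a simple ARIMA(1,1,0); in practice the order is chosen from ACF/PACF diagnostics
fit = ARIMA(ger, order=(1, 1, 0)).fit()
print(fit.forecast(steps=5))   # point forecasts for the next five years
```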

  12. A void ratio dependent water retention curve model including hydraulic hysteresis

    Directory of Open Access Journals (Sweden)

    Pasha Amin Y.

    2016-01-01

    Full Text Available Past experimental evidence has shown that the Water Retention Curve (WRC) evolves with mechanical stress and structural changes in the soil matrix. Models currently available in the literature for capturing the volume change dependency of the WRC are mainly empirical in nature, requiring an extensive experimental programme for parameter identification, which renders them unsuitable for practical applications. In this paper, an analytical model for the evaluation of the void ratio dependency of the WRC in deformable porous media is presented. The approach proposed enables quantification of the dependency of the WRC on void ratio solely based on the form of the WRC at the reference void ratio and requires no additional parameters. The effect of hydraulic hysteresis on the evolution process is also incorporated in the model, an aspect rarely addressed in the literature. Expressions are presented for the evolution of main and scanning curves due to loading and change in the hydraulic path from scanning to main wetting/drying and vice versa, as well as for WRC parameters such as air entry value, air expulsion value, pore size distribution index and slope of the scanning curve. The model is validated using experimental data on compacted and reconstituted soils subjected to various hydro-mechanical paths. Good agreement is obtained between model predictions and experimental data in all the cases considered.

  13. Implications of different digital elevation models and preprocessing techniques to delineate debris flow inundation hazard zones in El Salvador

    Science.gov (United States)

    Anderson, E. R.; Griffin, R.; Irwin, D.

    2013-12-01

    Heavy rains and steep, volcanic slopes in El Salvador cause numerous landslides every year, posing a persistent threat to the population, economy and environment. Although potential debris inundation hazard zones have been delineated using digital elevation models (DEMs), some disparities exist between the simulated zones and actual affected areas. Moreover, these hazard zones have only been identified for volcanic lahars and not the shallow landslides that occur nearly every year. This is despite the availability of tools to delineate a variety of landslide types (e.g., the USGS-developed LAHARZ software). Limitations in DEM spatial resolution, age of the data, and hydrological preprocessing techniques can contribute to inaccurate hazard zone definitions. This study investigates the impacts of using different elevation models and pit filling techniques in the final debris hazard zone delineations, in an effort to determine which combination of methods most closely agrees with observed landslide events. In particular, a national DEM digitized from topographic sheets from the 1970s and 1980s provide an elevation product at a 10 meter resolution. Both natural and anthropogenic modifications of the terrain limit the accuracy of current landslide hazard assessments derived from this source. Global products from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (ASTER GDEM) offer more recent data but at the cost of spatial resolution. New data derived from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) in 2013 provides the opportunity to update hazard zones at a higher spatial resolution (approximately 6 meters). Hydrological filling of sinks or pits for current hazard zone simulation has previously been achieved through ArcInfo spatial analyst. Such hydrological processing typically only fills pits and can lead to drastic modifications of original elevation values

  14. Modified likelihood ratio tests in heteroskedastic multivariate regression models with measurement error

    CERN Document Server

    Melo, Tatiane F N; Patriota, Alexandre G

    2012-01-01

    In this paper, we develop a modified version of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class of distributions, which has the normal distribution as a special case. We derive the Skovgaard adjusted likelihood ratio statistic, which follows a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed test displays superior finite sample behavior as compared to the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

  15. Examining school-based bullying interventions using multilevel discrete time hazard modeling.

    Science.gov (United States)

    Ayers, Stephanie L; Wagaman, M Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E C

    2012-10-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K - 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65) was associated with significantly lower odds of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration students' microsystem roles.
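    A discrete-time hazard model of this kind is typically estimated as a logistic regression on a person-period data set, with one row per student per time interval at risk. The sketch below uses hypothetical data and a single-level specification for brevity (the study uses a multilevel version); variable names are made up for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-period data: one row per student per week at risk,
# until a second referral occurs (event = 1) or the student is censored
rng = np.random.default_rng(0)
rows = []
for student in range(300):
    ptc = int(rng.integers(0, 2))            # 1 = parent-teacher conference after the first referral
    for week in range(1, 19):
        p = 0.06 * (0.65 if ptc else 1.0)    # weekly hazard of a second referral
        event = int(rng.random() < p)
        rows.append({"student": student, "week": week, "ptc": ptc, "event": event})
        if event:
            break
pp = pd.DataFrame(rows)

# Discrete-time hazard model: logistic regression of the event indicator on time and the intervention
fit = smf.logit("event ~ week + ptc", data=pp).fit(disp=0)
print(np.exp(fit.params["ptc"]))             # adjusted odds ratio for the intervention
```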

  16. Statistical inference for the additive hazards model under outcome-dependent sampling.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design and by deriving the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and that the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of cancer associated with radon exposure.

  17. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K – 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65) was associated with significantly lower odds of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students’ mesosystems as well as utilizing disciplinary strategies that take into consideration students’ microsystem roles. PMID:22878779

  18. Interband B (E2) ratios in the rigid triaxial model, a review

    Science.gov (United States)

    Gupta, J. B.; Sharma, S.

    1989-01-01

    Up-to-date, accurate and extensive data on γ-g B(E2) ratios for even-even rare-earth nuclei are compared with the predictions of the rigid triaxial model of collective rotation, in a search for a correlation between the variation of nuclear structure with Z and N and the γ0 parameter of the model. The internal consistency of the predictions of the model is investigated, and the spectral features vis-a-vis the γ-soft and the γ-rigid potentials are discussed.

  19. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    Science.gov (United States)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient between the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 +/- 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.

  20. Methodologies for the assessment of earthquake-triggered landslides hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. They are appropriate for evaluation and mitigation plan development in landslide-prone areas. There are several techniques available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinions and represent hazard levels in descriptive terms. On the other hand, quantitative methods are objective and are commonly used due to the correlation between the instability factors and the location of the landslides. Within this group, statistical approaches and new heuristic techniques based on artificial intelligence (artificial neural network (ANN), fuzzy logic, etc.) provide rigorous analysis to assess landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. One application has been developed in El Salvador, a country of Central America where earthquake-triggered landslides are a common phenomenon. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 13 January 2001 earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example, slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered as the dependent variable. The results of the landslide susceptibility analysis are checked using landslide
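    The statistical core of the LR approach is a logistic regression of landslide occurrence on the instability factors listed above. A minimal sketch (hypothetical predictor values, not the El Salvador data set) is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
# Hypothetical instability factors per terrain cell: slope (deg), elevation (m), terrain roughness index
X = np.column_stack([rng.uniform(0, 45, n), rng.uniform(200, 1800, n), rng.uniform(0, 1, n)])
# Hypothetical landslide inventory: failures more likely on steep, rough slopes
p_true = 1.0 / (1.0 + np.exp(-(0.08 * X[:, 0] + 2.0 * X[:, 2] - 5.0)))
y = (rng.random(n) < p_true).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]   # per-cell probability, to be classed into hazard levels
print(susceptibility[:5].round(3))
```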

  1. Challenges in seismic hazard assessment: Analyses of ground motion modelling and seismotectonic sources

    OpenAIRE

    Sørensen, Mathilde Bøttger

    2006-01-01

    Seismic hazard assessment has an important societal impact in describing the levels of ground motion to be expected in a given region in the future. Challenges in seismic hazard assessment are closely associated with the fact that different regions, due to their differences in seismotectonic setting (and hence in earthquake occurrence) as well as socioeconomic conditions, require different and innovative approaches. One of the most important aspects in this regard is the seismici...

  2. Modeling lifetime data with multiple causes using cause specific reversed hazard rates

    Directory of Open Access Journals (Sweden)

    Paduthol Godan Sankaran

    2014-09-01

    Full Text Available In this paper we introduce and study cause specific reversed hazard rates in the context of left censored lifetime data with multiple causes. Nonparametric inference procedure for left censored lifetime data with multiple causes using cause specific reversed hazard rate is discussed. Asymptotic properties of the estimators are studied. Simulation studies are conducted to assess the efficiency of the estimators. Further, the proposed method is applied to mice mortality data (Hoel 1972 and Australian twin data (Duffy et al. 1990.

  3. Simulating floods : On the application of a 2D-hydraulic model for flood hazard and risk assessment

    OpenAIRE

    Alkema, D.

    2007-01-01

    Over the last decades, river floods in Europe seem to occur more frequently and are causing more and more economic and emotional damage. Understanding the processes causing flooding and the development of simulation models to evaluate countermeasures to control that damage are important issues. This study deals with the application of a 2D hydraulic flood propagation model for flood hazard and risk assessment. It focuses on two components: 1) how well does it predict the spatial-dynamic chara...

  4. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of counts in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method and can overcome the SMR problem when there is no observed bladder cancer in an area.
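    The SMR itself is just the ratio of observed to expected counts in each area; the sketch below (hypothetical district counts, not the Libyan registry data) illustrates the calculation, with expected counts obtained by applying the overall incidence rate to each district's population.

```python
import numpy as np

# Hypothetical bladder cancer counts and populations for five districts
observed = np.array([4, 0, 12, 7, 2])
population = np.array([50_000, 8_000, 120_000, 60_000, 30_000])

overall_rate = observed.sum() / population.sum()
expected = overall_rate * population
smr = observed / expected   # relative risk estimate; unstable when expected is small or observed is 0
print(np.round(smr, 2))
```

    The zero count in the small district illustrates the weakness noted above: its SMR is exactly 0 whatever the true risk, which is the situation the smoothed (log-normal) relative risk model is intended to handle.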

  5. Enhanced mathematical modeling of the displacement amplification ratio for piezoelectric compliant mechanisms

    Science.gov (United States)

    Ling, Mingxiang; Cao, Junyi; Zeng, Minghua; Lin, Jing; Inman, Daniel J.

    2016-07-01

    Piezo-actuated, flexure hinge-based compliant mechanisms have been frequently used in precision engineering in the last few decades. There have been a considerable number of publications on modeling the displacement amplification behavior of rhombus-type and bridge-type compliant mechanisms. However, due to an unclear geometric approximation and mechanical assumption between these two flexures, it is very difficult to obtain an exact description of the kinematic performance using previous analytical models, especially when the designed angle of the compliant mechanisms is small. Therefore, enhanced theoretical models of the displacement amplification ratio for rhombus-type and bridge-type compliant mechanisms are proposed to improve the prediction accuracy based on the distinct force analysis between these two flexures. The energy conservation law and the elastic beam theory are employed for modeling with consideration of the translational and rotational stiffness. Theoretical and finite elemental results show that the prediction errors of the displacement amplification ratio will be enlarged if the bridge-type flexure is simplified as a rhombic structure to perform mechanical modeling. More importantly, the proposed models exhibit better performance than the previous models, which is further verified by experiments.
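    For orientation, the ideal kinematic amplification ratio of a rhombus- or bridge-type mechanism, obtained under the rigid-link, small-displacement assumption (a standard first-order reference, not the enhanced model proposed in the paper), follows directly from the geometry of a link of length $l$ inclined at an angle $\theta$ to the input direction: with $x = l\cos\theta$, $y = l\sin\theta$ and $x^2 + y^2 = l^2$,

$$x\,\mathrm{d}x + y\,\mathrm{d}y = 0 \;\Rightarrow\; R_{\mathrm{ideal}} = \left|\frac{\mathrm{d}y}{\mathrm{d}x}\right| = \cot\theta,$$

    which grows rapidly as the design angle $\theta$ becomes small, precisely the regime in which, as noted above, elastic deformation makes this approximation (and earlier analytical models) least accurate.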

  6. Combining observations and model simulations to reduce the hazard of Etna volcanic ash plumes

    Science.gov (United States)

    Scollo, Simona; Boselli, Antonella; Coltelli, Mauro; Leto, Giuseppe; Pisani, Gianluca; Prestifilippo, Michele; Spinelli, Nicola; Wang, Xuan; Zanmar Sanchez, Ricardo

    2014-05-01

    Etna is one of the most active volcanoes in the world, with recent activity characterized by powerful lava fountains that produce eruption columns several kilometres high and disperse volcanic ash into the atmosphere. It is well known that, to improve the volcanic ash dispersal forecast of an ongoing explosive eruption, the input parameters used by volcanic ash dispersal models should be measured during the eruption. In this work, in order to better quantify the volcanic ash dispersal, we use data from the video-surveillance system of Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo, and from the lidar system, together with a volcanic ash dispersal model. In detail, the visible camera installed in Catania, 27 km from the vent, is able to evaluate the evolution of column height with time. The Lidar, installed at the "M.G. Fracastoro" astrophysical observatory (14.97° E, 37.69° N) of the Istituto Nazionale di Astrofisica in Catania, located at a distance of 7 km from the Etna summit craters, uses a frequency doubled Nd:YAG laser source operating at a 532-nm wavelength, with a repetition rate of 1 kHz. Backscattering and depolarization values measured by the Lidar system can give, with a certain degree of uncertainty, an estimation of the volcanic ash concentration in the atmosphere. The 12 August 2011 activity is considered a perfect test case because the volcanic plume was retrieved by both the camera and the Lidar. We evaluated the mass eruption rate from the column height and used best fit procedures comparing simulated volcanic ash concentrations with those extracted from the Lidar data. During this event, powerful lava fountains were well visible at about 08:30 GMT and a sustained eruption column was produced from about 08:55 GMT. Ash emission completely ceased around 11:30 GMT. The proposed approach is an attempt to produce more robust ash dispersal forecasts, reducing the hazard to air traffic during Etna volcanic crises.
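    A common way to obtain a first-order mass eruption rate from an observed column height is an empirical plume-height scaling such as that of Mastin et al. (2009); the sketch below applies that published relationship as an illustration and is not the fitting procedure actually used in this study.

```python
# Illustrative only: Mastin et al. (2009) empirical scaling H = 2.00 * V**0.241,
# with H the plume height above the vent (km) and V the dense-rock-equivalent
# volumetric flow rate (m^3/s); an assumed magma density converts V to a mass rate.
def mass_eruption_rate(height_km, magma_density=2500.0):
    volume_rate = (height_km / 2.00) ** (1.0 / 0.241)   # m^3/s DRE
    return volume_rate * magma_density                   # kg/s

print(f"{mass_eruption_rate(5.0):.2e} kg/s")   # e.g. a column ~5 km above the vent
```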

  7. Generating survival times to simulate Cox proportional hazards models with time-varying covariates.

    Science.gov (United States)

    Austin, Peter C

    2012-12-20

    Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate.
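    For the simplest of the three settings, an exponential baseline hazard with a dichotomous covariate that switches once from untreated to treated at a known time, the closed-form result amounts to inverting a piecewise-constant cumulative hazard. The sketch below illustrates that inversion with hypothetical parameter values; it follows the general principle rather than reproducing the paper's exact expressions.

```python
import numpy as np

def sim_time_switching_covariate(lam, beta_x, x, beta_z, t_switch, rng):
    """Draw one survival time under h(t) = lam*exp(beta_x*x) before t_switch
    and h(t) = lam*exp(beta_x*x + beta_z) after, by inverting the cumulative hazard."""
    target = -np.log(rng.random())                 # total cumulative hazard to accumulate
    h0 = lam * np.exp(beta_x * x)                  # hazard before treatment starts
    if target < h0 * t_switch:                     # event occurs before the switch
        return target / h0
    h1 = lam * np.exp(beta_x * x + beta_z)         # hazard after treatment starts
    return t_switch + (target - h0 * t_switch) / h1

rng = np.random.default_rng(42)
times = [sim_time_switching_covariate(lam=0.1, beta_x=0.5, x=int(rng.integers(0, 2)),
                                      beta_z=-0.7, t_switch=2.0, rng=rng)
         for _ in range(5)]
print(np.round(times, 2))
```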

  8. Using a ballistic-caprock model for developing a volcanic projectiles hazard map at Santorini caldera

    Science.gov (United States)

    Konstantinou, Konstantinos

    2015-04-01

    Volcanic Ballistic Projectiles (VBPs) are rock/magma fragments of variable size that are ejected from active vents during explosive eruptions. VBPs follow almost parabolic trajectories that are influenced by gravity and drag forces before they reach their impact point on the Earth's surface. Owing to their high temperature and kinetic energies, VBPs can potentially cause human casualties, severe damage to buildings as well as trigger fires. Since the Minoan eruption the Santorini caldera has produced several smaller (VEI = 2-3) vulcanian eruptions, the last of which occurred in 1950, while in 2011 it also experienced significant deformation/seismicity even though no eruption eventually occurred. In this work, an eruptive model appropriate for vulcanian eruptions is used to estimate initial conditions (ejection height, velocity) for VBPs assuming a broad range of gas concentration/overpressure in the vent. These initial conditions are then inserted into a ballistic model for the purpose of calculating the maximum range of VBPs for different VBP sizes (0.35-3 m), varying drag coefficient as a function of VBP speed and varying air density as a function of altitude. In agreement with previous studies a zone of reduced drag is also included in the ballistic calculations that is determined based on the size of vents that were active in the Kameni islands during previous eruptions (< 1 km). Results show that the horizontal range of VBPs varies between 0.9-3 km and greatly depends on gas concentration, the extent of the reduced drag zone and the size of VBP. Hazard maps are then constructed by taking into account the maximum horizontal range values as well as potential locations of eruptive vents along a NE-SW direction around the Kameni islands (the so-called "Kameni line").
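    The core of such a ballistic model is numerical integration of the projectile's equations of motion under gravity and a velocity-dependent drag force. The sketch below is a generic constant-drag-coefficient integrator with hypothetical parameter values, not the study's full model (which varies the drag coefficient with speed, air density with altitude, and includes a reduced-drag zone near the vent).

```python
import numpy as np

def ballistic_range(v0, angle_deg, diameter, rho_rock=2500.0, rho_air=1.2, cd=1.0, dt=0.01):
    """Horizontal range (m) of a spherical block launched with speed v0 (m/s) at angle_deg."""
    r = diameter / 2.0
    mass = rho_rock * 4.0 / 3.0 * np.pi * r**3
    area = np.pi * r**2
    g = 9.81
    x, z = 0.0, 0.0
    vx, vz = v0 * np.cos(np.radians(angle_deg)), v0 * np.sin(np.radians(angle_deg))
    while z >= 0.0:
        speed = np.hypot(vx, vz)
        drag = 0.5 * rho_air * cd * area * speed   # drag force divided by speed
        vx -= (drag * vx / mass) * dt
        vz -= (g + drag * vz / mass) * dt
        x += vx * dt
        z += vz * dt
    return x

print(f"{ballistic_range(150.0, 45.0, 1.0):.0f} m")   # a 1 m block ejected at 150 m/s
```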

  9. A spatiotemporal optimization model for the evacuation of the population exposed to flood hazard

    Science.gov (United States)

    Alaeddine, H.; Serrhini, K.; Maizia, M.

    2015-03-01

    Managing the crisis caused by natural disasters, and especially by floods, requires the development of effective evacuation systems. An effective evacuation system must take into account certain constraints, including those related to traffic network, accessibility, human resources and material equipment (vehicles, collecting points, etc.). The main objective of this work is to provide assistance to technical services and rescue forces in terms of accessibility by offering itineraries relating to rescue and evacuation of people and property. We consider in this paper the evacuation of an urban area of medium size exposed to the hazard of flood. In case of inundation, most people will be evacuated using their own vehicles. Two evacuation types are addressed in this paper: (1) a preventive evacuation based on a flood forecasting system and (2) an evacuation during the disaster based on flooding scenarios. The two study sites on which the developed evacuation model is applied are the Tours valley (Fr, 37), which is protected by a set of dikes (preventive evacuation), and the Gien valley (Fr, 45), which benefits from a low rate of flooding (evacuation before and during the disaster). Our goal is to construct, for each of these two sites, a chronological evacuation plan, i.e., computing for each individual the departure date and the path to reach the assembly point (also called shelter) according to a priority list established for this purpose. The evacuation plan must avoid the congestion on the road network. Here we present a spatiotemporal optimization model (STOM) dedicated to the evacuation of the population exposed to natural disasters and more specifically to flood risk.

  10. Flow-R, a model for susceptibility mapping of debris flows and other gravitational hazards at a regional scale

    Directory of Open Access Journals (Sweden)

    P. Horton

    2013-04-01

    Full Text Available The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and so produces more realistic extents. The choices of the datasets and the algorithms are open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution as a good compromise between processing time
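    For reference, Holmgren's (1994) multiple-flow-direction algorithm, which Flow-R modifies, distributes the flow leaving a cell among its lower neighbours with weights proportional to a power of the slope. A minimal sketch of that basic weighting (not the improved version implemented in Flow-R) is:

```python
import numpy as np

def holmgren_weights(drops, distances, x=4.0):
    """Fraction of flow routed to each of the 8 neighbours (Holmgren 1994):
    weights proportional to (tan beta)**x over the downslope neighbours only."""
    tan_beta = np.maximum(drops, 0.0) / distances   # slope gradient towards each neighbour
    weights = tan_beta ** x
    total = weights.sum()
    return weights / total if total > 0 else weights

cell = 10.0                                         # DEM resolution (m)
dist = cell * np.array([1, np.sqrt(2), 1, np.sqrt(2), 1, np.sqrt(2), 1, np.sqrt(2)])
drops = np.array([2.0, 1.5, 0.5, -0.3, 0.0, 0.1, 1.0, 0.8])   # hypothetical elevation drops (m)
print(holmgren_weights(drops, dist).round(3))
```

    A larger exponent x concentrates the flow along the steepest descent (approaching a single-direction algorithm), while a small x spreads it more widely.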

  11. Multiplicative models for survival percentiles: estimating percentile ratios and multiplicative interaction in the metric of time

    Directory of Open Access Journals (Sweden)

    Andrea Bellavia

    2016-09-01

    Full Text Available Evaluating percentiles of survival was proposed as a possible method to analyze time-to-event outcomes. This approach sets the cumulative risk of the event of interest to a specific proportion and evaluates the time by which this proportion is attained. In this context, exposure-outcome associations can be expressed in terms of differences in survival percentiles, expressing the difference in survival time by which different subgroups of the study population experience the same proportion of events, or in terms of percentile ratios, expressing the strength of the exposure in accelerating the time to the event. Additive models for conditional survival percentiles have been introduced, and their use to estimate multivariable-adjusted percentile differences and additive interaction on the metric of time has been described. On the other hand, the percentile ratio has never been fully described, nor have statistical methods been presented for its model-based estimation. To bridge this gap, we provide a detailed presentation of the percentile ratio as a relative measure to assess exposure-outcome associations in the context of time-to-event analysis, discussing its interpretation and advantages. We then introduce multiplicative statistical models for conditional survival percentiles, and present their use in estimating percentile ratios and multiplicative interactions in the metric of time. The introduction of multiplicative models for survival percentiles allows researchers to apply this approach in a large variety of contexts where multivariable adjustment is required, enriching the potential of the percentile approach as a flexible and valuable tool to evaluate time-to-event outcomes in medical research.
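    Concretely, the pth survival percentile is the time by which a proportion p of subjects has experienced the event, and the percentile ratio compares that time between exposure groups. A minimal, unadjusted sketch using Kaplan-Meier estimates on hypothetical data (unlike the multiplicative regression models introduced in the paper) is:

```python
import numpy as np

def km_percentile(time, event, p):
    """Time by which a proportion p of subjects has had the event (Kaplan-Meier estimate)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    surv = 1.0
    for t, d in zip(time, event):
        if d:                          # event (not censored) at time t
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        if surv <= 1.0 - p:
            return t
    return np.inf                      # percentile not reached within follow-up

rng = np.random.default_rng(3)
# Hypothetical data, administratively censored at t = 15; exposed subjects fail earlier
t_unexp = rng.exponential(10.0, 200); e_unexp = t_unexp < 15; t_unexp = np.minimum(t_unexp, 15)
t_exp = rng.exponential(6.0, 200); e_exp = t_exp < 15; t_exp = np.minimum(t_exp, 15)

p25_unexp = km_percentile(t_unexp, e_unexp, 0.25)
p25_exp = km_percentile(t_exp, e_exp, 0.25)
print(p25_exp / p25_unexp)   # percentile ratio: < 1 means the exposure shortens time to the event
```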

  12. Numerical modeling of debris avalanches at Nevado de Toluca (Mexico): implications for hazard evaluation and mapping

    Science.gov (United States)

    Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.

    2007-05-01

    The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca Volcano (Mexico) using the TITAN2D simulation software, and its application to the creation of hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area that today holds more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has had long phases of inactivity characterized by erosion and by the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are extensive debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events occurred mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the E and the Nopal debris avalanche deposit towards the W. The analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. The flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were based entirely upon the information stored in the geological database. The modeling software is built upon equations

  13. Pattern Formation in a Cross-Diffusive Ratio-Dependent Predator-Prey Model

    Directory of Open Access Journals (Sweden)

    Xinze Lian

    2012-01-01

    Full Text Available This paper presents a theoretical analysis of the evolutionary processes involving the distribution of organisms and their interactions in a spatially distributed Holling-III ratio-dependent predator-prey model with self- and cross-diffusion. The diffusion instability of the positive equilibrium of the model with Neumann boundary conditions is discussed. Furthermore, we present novel numerical evidence of the time evolution of patterns controlled by self- and cross-diffusion in the model, and find that the model dynamics exhibit cross-diffusion-controlled pattern formation and growth into spots, stripes, and replicating spiral wave patterns, which shows that the reaction-diffusion model is useful for revealing the spatial predation dynamics in the real world.

  14. Mendel's use of mathematical modelling: ratios, predictions and the appeal to tradition.

    Science.gov (United States)

    Teicher, Amir

    2014-01-01

    The seventh section of Gregor Mendel's famous 1866 paper contained a peculiar mathematical model, which predicted the expected ratios between the number of constant and hybrid types, assuming self-pollination continued throughout further generations. This model was significant for Mendel's argumentation and was perceived as inseparable from his entire theory at the time. A close examination of this model reveals that it has several perplexing aspects which have not yet been systematically scrutinized. The paper analyzes those aspects, dispels some common misconceptions regarding the interpretation of the model, and re-evaluates the role of this model for Mendel himself. In light of the resulting analysis, Mendel's position between nineteenth-century hybridist tradition and twentieth-century population genetics is reassessed, and his sophisticated use of mathematics to legitimize his innovative theory is uncovered.
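    The model in question is a simple recursion: each generation of self-pollination halves the proportion of hybrids, so after $n$ generations the expected ratio of the two constant forms to the hybrid form is, in Mendel's notation,

$$A : Aa : a = (2^n - 1) : 2 : (2^n - 1),$$

    so that hybrids are expected to make up only $1/2^n$ of the population. This is the prediction whose construction and role in Mendel's argument the paper re-examines.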

  15. ENTROPY-COST RATIO MAXIMIZATION MODEL FOR EFFICIENT STOCK PORTFOLIO SELECTION USING INTERVAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Mainak Dey

    2013-02-01

    Full Text Available This paper introduces a new stock portfolio selection model in a non-stochastic environment. Following the principle of maximum entropy, a new entropy-cost ratio function is introduced as the objective function. The uncertain returns, risks and dividends of the securities are treated as interval numbers. Along with the objective function, eight different types of constraints are used to make the model more realistic. Three models are proposed, describing the future financial market optimistically, pessimistically, and in a combined form. To illustrate the effectiveness and tractability of the proposed models, they are tested on a set of data from the Bombay Stock Exchange (BSE). The solutions are obtained using a genetic algorithm.

  16. Atmospheric Modelling for Neptune's Methane D/H Ratio - Preliminary Results

    CERN Document Server

    Cotton, Daniel V; Bott, Kimberly; Bailey, Jeremy

    2015-01-01

    The ratio of deuterium to hydrogen (D/H ratio) of Solar System bodies is an important clue to their formation histories. Here we fit a Neptunian atmospheric model to Gemini Near Infrared Spectrograph (GNIRS) high spectral resolution observations and determine the D/H ratio in methane absorption in the infrared H-band ($\sim$1.6 $\mu$m). The model was derived using our radiative transfer software VSTAR (Versatile Software for the Transfer of Atmospheric Radiation) and atmospheric fitting software ATMOF (ATMOspheric Fitting). The methane line list used for this work has only become available in the last few years, enabling a refinement of earlier estimates. We identify a bright region on the planetary disc and find it to correspond to an optically thick lower cloud. Our preliminary determination of CH$_{\rm 3}$D/CH$_{\rm 4}$ is 3.0$\times10^{-4}$, which is in line with the recent determination of Irwin et al. (2014) of 3.0$^{+1.0}_{-0.9}\times10^{-4}$, made using the same model parameters and line list but...

  17. A 3-dimensional in vitro model of epithelioid granulomas induced by high aspect ratio nanomaterials

    Directory of Open Access Journals (Sweden)

    Hurt Robert H

    2011-05-01

    Full Text Available Abstract Background The most common causes of granulomatous inflammation are persistent pathogens and poorly-degradable irritating materials. A characteristic pathological reaction to intratracheal instillation, pharyngeal aspiration, or inhalation of carbon nanotubes is formation of epithelioid granulomas accompanied by interstitial fibrosis in the lungs. In the mesothelium, a similar response is induced by high aspect ratio nanomaterials, including asbestos fibers, following intraperitoneal injection. This asbestos-like behaviour of some engineered nanomaterials is a concern for their potential adverse health effects in the lungs and mesothelium. We hypothesize that high aspect ratio nanomaterials will induce epithelioid granulomas in nonadherent macrophages in 3D cultures. Results Carbon black particles (Printex 90) and crocidolite asbestos fibers were used as well-characterized reference materials and compared with three commercial samples of multiwalled carbon nanotubes (MWCNTs). Doses were identified in 2D and 3D cultures in order to minimize acute toxicity and to reflect realistic occupational exposures in humans and in previous inhalation studies in rodents. Under serum-free conditions, exposure of nonadherent primary murine bone marrow-derived macrophages to 0.5 μg/ml (0.38 μg/cm2) of crocidolite asbestos fibers or MWCNTs, but not carbon black, induced macrophage differentiation into epithelioid cells and formation of stable aggregates with the characteristic morphology of granulomas. Formation of multinucleated giant cells was also induced by asbestos fibers or MWCNTs in this 3D in vitro model. After 7-14 days, macrophages exposed to high aspect ratio nanomaterials co-expressed proinflammatory (M1) as well as profibrotic (M2) phenotypic markers. Conclusions Induction of epithelioid granulomas appears to correlate with high aspect ratio and complex 3D structure of carbon nanotubes, not with their iron content or surface area. This model

  18. Decision support model for assessing aquifer pollution hazard and prioritizing groundwater resources management in the wet Pampa plain, Argentina.

    Science.gov (United States)

    Lima, M Lourdes; Romanelli, Asunción; Massone, Héctor E

    2013-06-01

    This paper gives an account of the implementation of a decision support system for assessing aquifer pollution hazard and prioritizing subwatersheds for groundwater resources management in the southeastern Pampa plain of Argentina. The use of this system is demonstrated with an example from the Dulce Stream Basin (1,000 km2, encompassing 27 subwatersheds), which has a high level of agricultural activity and extensive available data regarding aquifer geology. In the logic model, aquifer pollution hazard is assessed as a function of two primary topics: groundwater and soil conditions. This logic model shows the state of each evaluated landscape with respect to aquifer pollution hazard based mainly on the parameters of the DRASTIC and GOD models. The decision model allows prioritizing subwatersheds for groundwater resources management according to three main criteria: farming activities, agrochemical application, and irrigation use. Stakeholder participation, through interviews, in combination with expert judgment was used to select and weight each criterion. The resulting subwatershed priority map, obtained by combining the logic and decision models, allowed the identification of five subwatersheds in the upper and middle basin as the main aquifer protection areas. The results reasonably fit the natural conditions of the basin, identifying subwatersheds with shallow water depth, loam to loam-silt soil texture and pasture land cover in the middle basin, and others with intensive agricultural activity coinciding with the natural recharge area of the aquifer system. Major difficulties and some recommendations for applying this methodology in real-world situations are discussed.
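    The DRASTIC component referred to above is an additive weighted index over seven hydrogeological parameters. A minimal sketch using the standard published weights (Aller et al., 1987) with hypothetical ratings, not those of the Dulce Stream Basin, is:

```python
# Standard DRASTIC weights; the ratings (1-10) below are hypothetical for one landscape unit
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
ratings = {"D": 9,   # Depth to water table (shallow -> high rating)
           "R": 6,   # net Recharge
           "A": 6,   # Aquifer media
           "S": 4,   # Soil media
           "T": 10,  # Topography (flat -> high rating)
           "I": 5,   # Impact of the vadose zone
           "C": 4}   # hydraulic Conductivity

drastic_index = sum(weights[k] * ratings[k] for k in weights)
print(drastic_index)   # higher values indicate higher intrinsic aquifer pollution hazard
```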

  19. First look at changes in flood hazard in the Inter-Sectoral Impact Model Intercomparison Project ensemble

    Science.gov (United States)

    Dankers, Rutger; Arnell, Nigel W.; Clark, Douglas B.; Falloon, Pete D.; Fekete, Balázs M.; Gosling, Simon N.; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik

    2014-03-01

    Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20-45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5-30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies.
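    A 30-year return level of this kind is typically estimated by fitting an extreme-value distribution to a series of annual maxima; the sketch below (hypothetical discharge values for a single grid point, generalized extreme value fit) illustrates the calculation rather than the exact procedure used across the ensemble.

```python
from scipy.stats import genextreme

# Hypothetical annual maxima of 5-day mean discharge (m^3/s) at one grid point, 40 years
annual_max = genextreme.rvs(c=-0.1, loc=800, scale=150, size=40, random_state=7)

# Fit a GEV distribution and read off the 30-year return level
# (the flow exceeded with probability 1/30 in any given year)
c, loc, scale = genextreme.fit(annual_max)
rl_30 = genextreme.isf(1.0 / 30.0, c, loc, scale)
print(f"30-year return level ~ {rl_30:.0f} m^3/s")
```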

  20. A drought hazard assessment index based on the VIC-PDSI model and its application on the Loess Plateau, China

    Science.gov (United States)

    Zhang, Baoqing; Wu, Pute; Zhao, Xining; Wang, Yubao; Gao, Xiaodong; Cao, Xinchun

    2013-10-01

    Drought is a complex natural hazard that is poorly understood and difficult to assess. This paper describes a VIC-PDSI model approach to understanding drought in which the Variable Infiltration Capacity (VIC) Model was combined with the Palmer Drought Severity Index (PDSI). Simulated results obtained using the VIC model were used to replace the output of the more conventional two-layer bucket-type model for hydrological accounting, and a two-class-based procedure for calibrating the characteristic climate coefficient (Kj) was introduced to allow for a more reliable computation of the PDSI. The VIC-PDSI model was used in conjunction with GIS technology to create a new drought assessment index (DAI) that provides a comprehensive overview of drought duration, intensity, frequency, and spatial extent. This new index was applied to drought hazard assessment across six subregions of the whole Loess Plateau. The results show that the DAI over the whole Loess Plateau ranged between 11 and 26 (a greater DAI value indicates a more severe drought hazard level). The drought hazards in the upper reaches of the Yellow River were more severe than those in the middle reaches. The drought-prone regions over the study area were mainly concentrated in the Inner Mongolian small rivers and the Zuli and Qingshui River basins, while the drought hazards in the drainage area between Hekouzhen-Longmen and the Weihe River basin were relatively mild during 1971-2010. The most serious drought vulnerabilities were associated with the area around Lanzhou, Zhongning, and Yinchuan, where the development of water-saving irrigation is the most direct and effective way to defend against and reduce losses from drought. For the relatively humid regions, it will be necessary to establish rainwater harvesting systems, which could help to relieve the risk of water shortage and guarantee regional food security. Because the DAI considers the multiple characteristics of drought duration, intensity, frequency

  1. Modeling the behavior of signal-to-noise ratio for repeated snapshot imaging

    CERN Document Server

    Li, Junhui; Yang, Dongyue; Wu, Guohua; Yin, Longfei; Guo, Hong

    2016-01-01

    For imaging of a static object by means of sequential, repeated, independent measurements, a theoretical model of the behavior of the signal-to-noise ratio (SNR) with a varying number of measurements is developed, based on the information capacity of optical imaging systems. Experimental verification of imaging using a pseudo-thermal light source is implemented, both for the direct average of multiple measurements and for the image reconstructed by second-order fluctuation correlation (SFC), which is closely related to ghost imaging. Successful curve fitting of data measured under different conditions verifies the model.
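    For direct averaging of repeated snapshots with independent additive noise, the SNR is expected to grow roughly as the square root of the number of measurements; the sketch below is a generic simulation of that baseline behaviour (not the information-capacity model developed in the paper) against which an SFC reconstruction can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 256)) ** 2    # hypothetical static "object"
sigma = 1.0                                             # per-snapshot noise level

for n in [1, 4, 16, 64, 256]:
    frames = signal + rng.normal(0, sigma, size=(n, signal.size))
    mean_img = frames.mean(axis=0)
    snr = signal.std() / (mean_img - signal).std()      # empirical SNR of the averaged image
    print(n, round(snr, 2))                             # grows roughly as sqrt(n)
```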

  2. Mathematical Decision Models Applied for Qualifying and Planning Areas Considering Natural Hazards and Human Dealing

    Science.gov (United States)

    Anton, Jose M.; Grau, Juan B.; Tarquis, Ana M.; Sanchez, Elena; Andina, Diego

    2014-05-01

    The authors have been involved in the use of Mathematical Decision Models (MDM) to improve knowledge and planning for large natural or administrative areas in which natural soils, climate, and agricultural and forest uses are the main factors, but where human resources and outcomes are also important and natural hazards are relevant. In one line of work they contributed to the qualification of the lands of the Community of Madrid (CM), an administrative area in the centre of Spain containing a band of mountains to the north, part of the Iberian plateau and river terraces in its central part, and the Madrid metropolis, starting from an official UPM study for the CM that qualified lands using a FAO model based on required minimums over a whole set of Soil Science criteria. From these criteria the authors first built a complementary additive qualification, and later tried an intermediate qualification combining both using fuzzy logic. The authors have also been involved, together with colleagues from Argentina and elsewhere who are in contact with local planners, in the delineation of regions and the selection of management entities for them. At these general levels they adopted multi-criteria MDM, using a weighted PROMETHEE and an ELECTRE-I with the same elicited weights for the criteria and data, and in parallel AHP using Expert Choice, based on pairwise comparisons among criteria structured in two levels. The alternatives depend on the case study, and these areas with monsoon climates are subject to natural hazards that are decisive for their selection and qualification, entering the initial matrix used for ELECTRE and PROMETHEE. In the natural area of Arroyos Menores, south of the town of Rio Cuarto, with the subarea of La Colacha to the north, the loess lands are rich but now suffer from water erosion forming regressive gullies that are spoiling them, so land-use alternatives must consider soil conservation and hydraulic management actions. The soils may be used in diverse, non-compatible ways, such as autochthonous forest, high-value forest, traditional

  3. Examining HPV threat-to-efficacy ratios in the Extended Parallel Process Model.

    Science.gov (United States)

    Carcioppolo, Nick; Jensen, Jakob D; Wilson, Steven R; Collins, W Bart; Carrion, Melissa; Linnemeier, Georgiann

    2013-01-01

    The Extended Parallel Process Model (EPPM) posits that an effective fear appeal includes both threat and efficacy components; however, research has not addressed whether there is an optimal threat-to-efficacy ratio. It is possible that varying levels of threat and efficacy in a persuasive message could yield different effects on attitudes, beliefs, and behaviors. In a laboratory experiment, women (n = 442) were exposed to human papilloma virus (HPV) prevention messages containing one of six threat-to-efficacy ratios and one of two message frames (messages emphasizing the connection between HPV and cervical cancer or HPV and genital warts). Multiple mediation analysis revealed that a 1-to-1 ratio of threat to efficacy was most effective at increasing prevention intentions, primarily because it caused more fear and risk susceptibility than other message ratios. Response efficacy significantly mediated the relationship between message framing and intentions, such that participants exposed to a genital warts message reported significantly higher intentions, and this association can be explained in part through response efficacy. Implications for future theoretical research as well as campaigns and intervention research are discussed.

  4. Modeling Lahar Hazard Zones for Eruption-Generated Lahars from Lassen Peak, California

    Science.gov (United States)

    Robinson, J. E.; Clynne, M. A.

    2010-12-01

    Lassen Peak, a high-elevation, seasonally snow-covered peak located within Lassen Volcanic National Park, has lahar deposits in several drainages that head on or near the lava dome. This suggests that these drainages are susceptible to future lahars. The majority of the recognized lahar deposits are related to the May 19 and 22, 1915 eruptions of Lassen Peak. These small-volume eruptions generated lahars and floods when an avalanche of snow and hot rock, and a pyroclastic flow moved across the snow-covered upper flanks of the lava dome. Lahars flowed to the north down Lost Creek and Hat Creek. In Lost Creek, the lahars flowed up to 16 km downstream and deposited approximately 8.3 x 10^6 m^3 of sediment. This study uses geologic mapping of the 1915 lahar deposits as a guide for LAHARZ modeling to assist in the assessment of present-day susceptibility for lahars in drainages heading on Lassen Peak. The LAHARZ model requires a Height over Length (H/L) energy cone controlling the initiation point of a lahar. We chose a H/L cone with a slope of 0.3 that intersects the earth's surface at the break in slope at the base of the volcanic dome. Typically, the snow pack reaches its annual maximum by May. Average and maximum May snow-water content, a depth of water equal to 2.1 m and 3.5 m respectively, were calculated from a local snow gauge. A potential volume for individual 1915 lahars was calculated using the deposit volume, the snow-water contents, and the areas stripped of snow by the avalanche and pyroclastic flow. The calculated individual lahars in Lost Creek ranged in size from 9 x 10^6 m^3 to 18.4 x 10^6 m^3. These volumes modeled in LAHARZ matched the 1915 lahars remarkably well, with the modeled flows ending within 4 km of the mapped deposits. We delineated six drainage basins that head on or near Lassen Peak with the highest potential for lahar hazards: Lost Creek, Hat Creek, Manzanita Creek, Mill Creek, Warner Creek, and Bailey Creek. We calculated the area of each
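
    The volume bookkeeping described above (mapped deposit volume plus snow-water equivalent over the area stripped of snow) can be sketched as simple arithmetic; the stripped-area value below is a hypothetical placeholder, not a figure from the study.

```python
# Rough volume bookkeeping for an eruption-generated lahar: water released from
# melted snow plus remobilized sediment. The stripped-area value is an assumed
# placeholder; the deposit volume and snow-water depths are quoted above.
deposit_volume_m3 = 8.3e6          # mapped 1915 Lost Creek deposit
swe_avg_m, swe_max_m = 2.1, 3.5    # May snow-water equivalent (depth of water)
stripped_area_m2 = 3.0e6           # assumed area swept clear of snow

water_min = stripped_area_m2 * swe_avg_m
water_max = stripped_area_m2 * swe_max_m

# Bracket the flow volume as melted snow water plus entrained sediment.
volume_low = water_min + deposit_volume_m3
volume_high = water_max + deposit_volume_m3
print(f"bracketed lahar volumes: {volume_low:.2e} to {volume_high:.2e} m^3")
```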

  5. Subduction zone and crustal dynamics of western Washington; a tectonic model for earthquake hazards evaluation

    Science.gov (United States)

    Stanley, Dal; Villaseñor, Antonio; Benz, Harley

    1999-01-01

    The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This

  6. System Dynamics Model to develop resilience management strategies for lifelines exposed to natural hazards

    Science.gov (United States)

    Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele

    2016-04-01

    Resilience has recently become a key concept, and a crucial paradigm in the analysis of the impacts of natural disasters, mainly concerning Lifeline Systems (LS). Indeed, the traditional risk management approaches require a precise knowledge of all potential hazards and a full understanding of the interconnections among different infrastructures, based on past events and trends analysis. Nevertheless, due to the inner complexity of LS, their interconnectedness and the dynamic context in which they operate (i.e. technology, economy and society), it is difficult to gain a complete comprehension of the processes influencing vulnerabilities and threats. Therefore, resilience thinking addresses the complexities of large integrated systems and the uncertainty of future threats, emphasizing the absorbing, adapting and responsive behavior of the system. Resilience thinking approaches are focused on the capability of the system to deal with the unforeseeable. The increasing awareness of the role played by LS, has led governmental agencies and institutions to develop resilience management strategies. Risk prone areas, such as cities, are highly dependent on infrastructures providing essential services that support societal functions, safety, economic prosperity and quality of life. Among the LS, drinking water supply is critical for supporting citizens during emergency and recovery, since a disruption could have a range of serious societal impacts. A very well-known method to assess LS resilience is the TOSE approach. The most interesting feature of this approach is the integration of four dimensions: Technical, Organizational, Social and Economic. Such issues are all concurrent to the resilience level of an infrastructural system, and should be therefore quantitatively assessed. Several researches underlined that the lack of integration among the different dimensions, composing the resilience concept, may contribute to a mismanagement of LS in case of natural disasters

  7. Modeling the Plasma Flow in the Inner Heliosheath with a Spatially Varying Compression Ratio

    Science.gov (United States)

    Nicolaou, G.; Livadiotis, G.

    2017-03-01

    We examine a semi-analytical non-magnetic model of the termination shock location previously developed by Exarhos & Moussas. In their study, the plasma flow beyond the shock is considered incompressible and irrotational, thus the flow potential is analytically derived from the Laplace equation. Here we examine the characteristics of the downstream flow in the heliosheath in order to resolve several inconsistencies existing in the Exarhos & Moussas model. In particular, the model is modified in order to be consistent with the Rankine-Hugoniot jump conditions and the geometry of the termination shock. It is shown that a shock compression ratio varying along the latitude can lead to physically correct results. We describe the new model and present several simplified examples for a nearly spherical, strong termination shock. Under those simplifications, the upstream plasma is nearly adiabatic for large (˜100 AU) heliosheath thickness.
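
    The Rankine-Hugoniot consistency discussed above rests on the standard hydrodynamic jump relation for the shock compression ratio; a small sketch of that textbook formula follows (it is the generic gas-dynamics expression, not code from the paper).

```python
def compression_ratio(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density compression ratio for a hydrodynamic shock."""
    m2 = mach ** 2
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

for m in (2, 5, 10, 100):
    print(f"M = {m:5.1f}  ->  r = {compression_ratio(m):.3f}")
# For a strong shock (M >> 1) the ratio approaches (gamma + 1)/(gamma - 1) = 4.
```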

  8. Education and risk of coronary heart disease: assessment of mediation by behavioral risk factors using the additive hazards model.

    Science.gov (United States)

    Nordahl, Helene; Rod, Naja Hulvej; Frederiksen, Birgitte Lidegaard; Andersen, Ingelise; Lange, Theis; Diderichsen, Finn; Prescott, Eva; Overvad, Kim; Osler, Merete

    2013-02-01

    Education-related gradients in coronary heart disease (CHD) and mediation by behavioral risk factors are plausible given previous research; however, this has not been comprehensively addressed in absolute measures. Questionnaire data on the health behavior of 69,513 participants, 52 % women, from seven Danish cohort studies were linked to registry data on education and incidence of CHD. Mediation by smoking, low physical activity, and body mass index (BMI) of the association between education and CHD was estimated by applying newly proposed methods for mediation based on the additive hazards model, and compared with results from the Cox proportional hazards model. Short (vs. long) education was associated with 277 (95 % CI: 219, 336) additional cases of CHD per 100,000 person-years at risk among women, and 461 (95 % CI: 368, 555) additional cases among men. Of these additional cases, 17 (95 % CI: 12, 22) for women and 37 (95 % CI: 28, 46) for men could be ascribed to the pathway through smoking. Further, 39 (95 % CI: 30, 49) cases for women and 94 (95 % CI: 79, 110) cases for men could be ascribed to the pathway through BMI. The effects of low physical activity were negligible. Using contemporary methods for mediation based on the additive hazards model, we indicated the absolute numbers of CHD cases prevented when modifying smoking and BMI. This study confirms previous claims based on the Cox proportional hazards model that behavioral risk factors partially mediate the effect of education on CHD, and the results do not seem to be particularly model dependent.
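
    The analysis above is built on the additive hazards model, in which covariate effects read as extra events per unit of person-time. A minimal sketch of fitting Aalen's additive hazards model with the lifelines package is shown below; the column names and synthetic data are assumptions, and the mediation decomposition itself is not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "short_education": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "bmi": rng.normal(26, 4, n),
    "time": rng.exponential(10, n),   # follow-up years (synthetic)
    "chd": rng.integers(0, 2, n),     # CHD event indicator (synthetic)
})

# Aalen's additive hazards model: covariate effects enter the hazard additively,
# so the fitted cumulative coefficients read as additional events per person-time.
aaf = AalenAdditiveFitter(coef_penalizer=0.1)
aaf.fit(df, duration_col="time", event_col="chd")
print(aaf.cumulative_hazards_.tail())
```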

  9. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method.

    Science.gov (United States)

    Valentin, Jan B; Andreetta, Christian; Boomsma, Wouter; Bottaro, Sandro; Ferkinghoff-Borg, Jesper; Frellsen, Jes; Mardia, Kanti V; Tian, Pengfei; Hamelryck, Thomas

    2014-02-01

    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins. The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications.

  10. Indoor-to-outdoor particle concentration ratio model for human exposure analysis

    Science.gov (United States)

    Lee, Jae Young; Ryu, Sung Hee; Lee, Gwangjae; Bae, Gwi-Nam

    2016-02-01

    This study presents an indoor-to-outdoor particle concentration ratio (IOR) model for improved estimates of indoor exposure levels. This model is useful in epidemiological studies with large populations, because sampling indoor pollutants in all participants' houses is often necessary but impractical. As part of a study examining the association between air pollutants and atopic dermatitis in children, 16 parents agreed to measure the indoor and outdoor PM10 and PM2.5 concentrations at their homes for 48 h. Correlation analysis and multi-step multivariate linear regression analysis were performed to develop the IOR model. Temperature and floor level were found to be powerful predictors of the IOR. Despite the simplicity of the model, it demonstrated high accuracy in terms of the root mean square error (RMSE). Especially for long-term IOR estimations, the RMSE was as low as 0.064 and 0.063 for PM10 and PM2.5, respectively. When using a prediction model in an epidemiological study, understanding the consequences of the modeling error and justifying the use of the model is very important. In the last section, this paper discusses the impact of the modeling error and develops a novel methodology to justify the use of the model.
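
    The regression step described above (predicting the IOR from temperature and floor level and reporting RMSE) can be sketched as follows; the variable names and synthetic data are invented for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 16  # homes
temperature = rng.uniform(-5, 30, n)      # outdoor temperature, deg C
floor_level = rng.integers(1, 15, n)      # storey of the dwelling
ior = 0.8 - 0.005 * temperature - 0.01 * floor_level + rng.normal(0, 0.05, n)

# Ordinary least squares regression of the indoor-to-outdoor ratio on two predictors.
X = np.column_stack([temperature, floor_level])
model = LinearRegression().fit(X, ior)
rmse = np.sqrt(mean_squared_error(ior, model.predict(X)))
print("coefficients:", model.coef_, "RMSE:", round(rmse, 3))
```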

  11. Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL

    Science.gov (United States)

    Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María

    2016-04-01

    The objective of this work is to determine whether there is more soil erosion after the recurrence of several forest fires in an area. To that end, an area of 22,130 ha was studied because it has a high frequency of fires. This area is located in the northwest of the Iberian Peninsula. The erosion hazard was assessed at several points in time using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they have been burnt in the past 15 years. Because a detailed study of such a large area is complex and information is not available annually, it was necessary to select the most relevant dates. In August 2012 the most aggressive and extensive fire in the area occurred. The study therefore focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D (Revised Universal Soil Loss Equation) model was used to calculate erosion loss maps. This model improves on the traditional USLE (Wischmeier and D., 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible: the GIS "gvSIG" (http://www.gvsig.com/es) was used, and the metadata were taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics, some of whom suggest that it only serves for comparisons between areas and not for calculating absolute soil loss. These authors argue that the eroded soil actually recovered in field measurements can be about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
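
    The RUSLE-family estimate referenced above has the multiplicative form A = R * K * LS * C * P, with RUSLE3D replacing slope length by the upslope contributing area in the LS factor. A hedged sketch follows; the LS formulation and all parameter values are common textbook choices used for illustration, not the study's calibrated inputs.

```python
import numpy as np

def ls_factor_3d(upslope_area_m2_per_m, slope_rad, m=0.4, n=1.3):
    """LS factor from upslope contributing area (per unit contour width) and slope.

    One widely used RUSLE3D-style formulation; the exponents m and n are
    assumptions that vary between studies.
    """
    return ((m + 1.0)
            * (upslope_area_m2_per_m / 22.13) ** m
            * (np.sin(slope_rad) / 0.0896) ** n)

def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A = R * K * LS * C * P."""
    return R * K * LS * C * P

LS = ls_factor_3d(upslope_area_m2_per_m=500.0, slope_rad=np.radians(12))
# R, K, C, P values below are placeholders for a recently burnt hillslope.
print("A =", rusle_soil_loss(R=900, K=0.03, LS=LS, C=0.25, P=1.0))
```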

  12. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function obtained by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates.

  13. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 1-year storm in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  14. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 1-year storm in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  15. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 20-year storm in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  16. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 100-year storm in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  17. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  18. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 20-year storm in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  20. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  1. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 1-year storm in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  2. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 100-year storm in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  3. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 20-year storm in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  4. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 100-year storm in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  5. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  6. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  7. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: 20-year storm in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  9. Evaluation of hydrological models for scenario analyses: signal-to-noise-ratio between scenario effects and model uncertainty

    Directory of Open Access Journals (Sweden)

    H. Bormann

    2005-01-01

    Full Text Available Many model applications suffer from the fact that, although it is well known that model application implies different sources of uncertainty, there is no objective criterion to decide whether a model is suitable for a particular application or not. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations which enables the modeller to objectively decide on the suitability of a model for scenario analysis studies. The index is called the "signal-to-noise-ratio", and it is applied to an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, comparing water availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable to predict long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
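
    The index described above compares the scenario-induced change (the "signal") against model uncertainty (the "noise"). A minimal numeric sketch of one such ratio follows; using the ensemble spread as the noise term is an assumption made for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def scenario_signal_to_noise(baseline_runs, scenario_runs):
    """Ratio of the scenario-induced change to the spread of the model ensemble.

    baseline_runs / scenario_runs: water-availability indicator values from
    repeated (e.g. parameter-perturbed) model runs. Using the pooled ensemble
    standard deviation as the 'noise' term is an illustrative assumption.
    """
    baseline_runs = np.asarray(baseline_runs, float)
    scenario_runs = np.asarray(scenario_runs, float)
    signal = abs(scenario_runs.mean() - baseline_runs.mean())
    noise = np.sqrt(0.5 * (baseline_runs.var(ddof=1) + scenario_runs.var(ddof=1)))
    return signal / noise

print(scenario_signal_to_noise([410, 395, 420, 405], [350, 340, 365, 355]))
```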

  10. Modeling depth from motion parallax with the motion/pursuit ratio

    Directory of Open Access Journals (Sweden)

    Mark eNawrot

    2014-10-01

    Full Text Available The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.

  11. Modeling depth from motion parallax with the motion/pursuit ratio.

    Science.gov (United States)

    Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith

    2014-01-01

    The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.
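
    The motion/pursuit law referenced in both records relates relative depth to the ratio of retinal image motion to pursuit rate. A small numeric sketch of the idealized (small-angle) geometric relation is shown below; the empirical version proposed in the paper rescales this relation to account for perceptual foreshortening.

```python
def relative_depth_from_motion_pursuit(retinal_motion_deg_s, pursuit_deg_s,
                                       viewing_distance_m):
    """Small-angle form of the motion/pursuit law: d/f ~ (retinal motion)/(pursuit).

    Returns the depth interval d (in metres) of a point relative to the fixation
    plane at distance f. This is the idealized geometric relation, not the
    empirically rescaled version discussed in the abstract.
    """
    ratio = retinal_motion_deg_s / pursuit_deg_s
    return ratio * viewing_distance_m

# A point whose image moves at 0.5 deg/s during a 2 deg/s pursuit, viewed from
# 1.5 m, lies roughly 0.375 m from the fixation plane under this idealization.
print(relative_depth_from_motion_pursuit(0.5, 2.0, 1.5))
```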

  12. Wind-tunnel modelling of the tip-speed ratio influence on the wake evolution

    Science.gov (United States)

    Stein, Victor P.; Kaltenbach, Hans-Jakob

    2016-09-01

    Wind-tunnel measurements on the near-wake evolution of a three bladed horizontal axis wind turbine model (HAWT) in the scale 1:O(350) operating in uniform flow conditions and within a turbulent boundary layer at different tip speed ratios are presented. Operational conditions are chosen to exclude Reynolds number effects regarding the turbulent boundary layer as well as the rotor performance. Triple-wire anemometry is used to measure all three velocity components in the mid-vertical and mid-horizontal plane, covering the range from the near- to the far-wake region. In order to analyse wake properties systematically, power and thrust coefficients of the turbine were measured additionally. It is confirmed that realistic modelling of the wake evolution is not possible in a low-turbulence uniform approach flow. Profiles of mean velocity and turbulence intensity exhibit large deviations between the low-turbulence uniform flow and the turbulent boundary layer, especially in the far-wake region. For nearly constant thrust coefficients differences in the evolution of the near-wake can be identified for tip speed ratios in the range from 6.5 to 10.5. It is shown that with increasing downstream distances mean velocity profiles become indistinguishable whereas for turbulence statistics a subtle dependency on the tip speed ratio is still noticeable in the far-wake region.

  13. A prediction model for wind speed ratios at pedestrian level with simplified urban canopies

    Science.gov (United States)

    Ikegaya, N.; Ikeda, Y.; Hagishima, A.; Razak, A. A.; Tanimoto, J.

    2017-02-01

    The purpose of this study is to review and improve prediction models for wind speed ratios at pedestrian level with simplified urban canopies. We adopted an extensive database of velocity fields under various conditions for arrays consisting of cubes, slender or flattened rectangles, and rectangles with varying roughness heights. Conclusions are summarized as follows: first, a new geometric parameter is introduced as a function of the plan area index and the aspect ratio so as to express the increase in virtual density that causes wind speed reduction. Second, the estimated wind speed ratios in the range 0.05 database to within an error of ±25%. Lastly, the effects of the spatial distribution of the flow were investigated by classifying the regions near building models into areas in front of, to the side of, or behind the building. The correlation coefficients between the wind speeds averaged over the entire region, and the front or side region values are larger than 0.8. In contrast, in areas where the influence of roughness elements is significant, such as behind a building, the wind speeds are weakly correlated.

  14. A numerical test method of California bearing ratio on graded crushed rocks using particle flow modeling

    Directory of Open Access Journals (Sweden)

    Yingjun Jiang

    2015-04-01

    Full Text Available In order to better understand the mechanical properties of graded crushed rocks (GCRs and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR test on GCRs. The effects of different testing conditions and micro-mechanical parameters used in the model on the CBR numerical results have been systematically studied. The reliability of the numerical technique is verified. The numerical results suggest that the influences of the loading rate and Poisson's ratio on the CBR numerical test results are not significant. As such, a loading rate of 1.0–3.0 mm/min, a piston diameter of 5 cm, a specimen height of 15 cm and a specimen diameter of 15 cm are adopted for the CBR numerical test. The numerical results reveal that the CBR values increase with the friction coefficient at the contact and shear modulus of the rocks, while the influence of Poisson's ratio on the CBR values is insignificant. The close agreement between the CBR numerical results and experimental results suggests that the numerical simulation of the CBR values is promising to help assess the mechanical properties of GCRs and to optimize the grading design. Besides, the numerical study can provide useful insights on the mesoscopic mechanism.

  15. Deriving metabolic engineering strategies from genome-scale modeling with flux ratio constraints.

    Science.gov (United States)

    Yen, Jiun Y; Nazem-Bokaee, Hadi; Freedman, Benjamin G; Athamneh, Ahmad I M; Senger, Ryan S

    2013-05-01

    Optimized production of bio-based fuels and chemicals from microbial cell factories is a central goal of systems metabolic engineering. To achieve this goal, a new computational method of using flux balance analysis with flux ratios (FBrAtio) was further developed in this research and applied to five case studies to evaluate and design metabolic engineering strategies. The approach was implemented using publicly available genome-scale metabolic flux models. Synthetic pathways were added to these models along with flux ratio constraints by FBrAtio to achieve increased (i) cellulose production from Arabidopsis thaliana; (ii) isobutanol production from Saccharomyces cerevisiae; (iii) acetone production from Synechocystis sp. PCC6803; (iv) H2 production from Escherichia coli MG1655; and (v) isopropanol, butanol, and ethanol (IBE) production from engineered Clostridium acetobutylicum. The FBrAtio approach was applied to each case to simulate a metabolic engineering strategy already implemented experimentally, and flux ratios were continually adjusted to find (i) the end-limit of increased production using the existing strategy, (ii) new potential strategies to increase production, and (iii) the impact of these metabolic engineering strategies on product yield and culture growth. The FBrAtio approach has the potential to design "fine-tuned" metabolic engineering strategies in silico that can be implemented directly with available genomic tools.
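
    The FBrAtio idea of adding flux ratio constraints to a flux balance problem can be illustrated on a toy three-reaction network solved with scipy's linear programming routine; this is not the authors' implementation, just the constraint pattern applied to an invented network.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v_up) feeds metabolite A, which is split between a
# growth branch (v_grow) and a product branch (v_prod).
# Steady state:                        v_up - v_grow - v_prod = 0
# Flux ratio (FBrAtio-style) constraint:  v_prod = 3 * v_grow
# Objective: maximize product flux, i.e. minimize -v_prod.
c = np.array([0.0, 0.0, -1.0])              # variables: v_up, v_grow, v_prod
A_eq = np.array([[1.0, -1.0, -1.0],         # mass balance on metabolite A
                 [0.0, -3.0,  1.0]])        # ratio constraint v_prod - 3*v_grow = 0
b_eq = np.zeros(2)
bounds = [(0, 10), (0, None), (0, None)]    # uptake capped at 10 flux units

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("v_up, v_grow, v_prod =", res.x)      # -> [10.  2.5  7.5]
```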

  16. Revisiting the concept of Redfield ratios applied to plankton stoichiometry - Addressing model uncertainties with respect to the choice of C:N:P ratios for phytoplankton

    Science.gov (United States)

    Kreus, Markus; Paetsch, Johannes; Grosse, Fabian; Lenhart, Hermann; Peck, Myron; Pohlmann, Thomas

    2017-04-01

    Ongoing Ocean Acidification (OA) and climate change related trends impact on physical (temperature), chemical (CO2 buffer capacity) and biological (stoichiometric) properties of the marine environment. These threats affect the global ocean but they appear particularly pronounced in marginal and shelf seas. Marine biogeochemical models are often used to investigate the impacts of climate change and changes in OA on the marine system as well as its exchange with the atmosphere. Different studies showed that both the structural composition of the models and the elemental ratios of particulate organic matter in the surface ocean affect the key processes controlling the ocean's efficiency storing atmospheric excess carbon. Recent studies focus on the variability of the elemental ratios of phytoplankton and found that the high plasticity of C:N:P ratios enables the storage of large amounts of carbon by incorporation into carbohydrates and lipids. Our analysis focuses on the North Sea, a temperate European shelf sea, for the period 2000-2014. We performed an ensemble of model runs differing only in phytoplankton stoichiometry, representing combinations of C:P = [132.5, 106, 79.5] and N:P=[20, 16, 12] (i.e., Redfield ratio +/- 25%). We examine systematically the variations in annual averages of net primary production (NPP), net ecosystem production in the upper 30 m (NEP30), export production below 30 m depth (EXP30), and the air-sea flux of CO2 (ASF). Ensemble average fluxes (and standard deviations) resulted in NPP = 15.4 (2.8) mol C m-2 a-1, NEP30 = 5.4 (1.1) mol C m-2 a-1, EXP30 = 8.1 (1.1) mol C m-2 a-1 and ASF = 1.1 (0.5) mol C m-2 a-1. All key parameters exhibit only minor variations along the axis of constant C:N, but correlate positively with increasing C:P and decreasing N:P ratios. Concerning regional differences, lowest variations in local fluxes due to different stoichiometric ratios can be found in the shallow southern and coastal North Sea. Highest

  17. Aspect Ratio of Receiver Node Geometry based Indoor WLAN Propagation Model

    Science.gov (United States)

    Naik, Udaykumar; Bapat, Vishram N.

    2016-09-01

    This paper presents validation of an indoor wireless local area network (WLAN) propagation model for varying rectangular receiver node geometry. The rectangular client node configuration is a standard node arrangement in computer laboratories of academic institutes and research organizations. The model assists in installing network nodes for better signal coverage. The proposed model is backed by wide-ranging real-time received signal strength measurements at 2.4 GHz. The shadow fading component of signal propagation under a realistic indoor environment is modelled with a dependency on the varying aspect ratio of the client node geometry. The newly developed model is useful in predicting indoor path loss for IEEE 802.11b/g WLAN. The new model provides better performance in comparison to the well-known International Telecommunication Union and free space propagation models. It is shown that the proposed model is simple and can be a useful tool for indoor WLAN node deployment planning and a quick method for the best utilisation of office space.

  18. Aspect Ratio of Receiver Node Geometry based Indoor WLAN Propagation Model

    Science.gov (United States)

    Naik, Udaykumar; Bapat, Vishram N.

    2017-08-01

    This paper presents validation of an indoor wireless local area network (WLAN) propagation model for varying rectangular receiver node geometry. The rectangular client node configuration is a standard node arrangement in computer laboratories of academic institutes and research organizations. The model assists in installing network nodes for better signal coverage. The proposed model is backed by wide-ranging real-time received signal strength measurements at 2.4 GHz. The shadow fading component of signal propagation under a realistic indoor environment is modelled with a dependency on the varying aspect ratio of the client node geometry. The newly developed model is useful in predicting indoor path loss for IEEE 802.11b/g WLAN. The new model provides better performance in comparison to the well-known International Telecommunication Union and free space propagation models. It is shown that the proposed model is simple and can be a useful tool for indoor WLAN node deployment planning and a quick method for the best utilisation of office space.

  19. Application of Spatial Regression Models to Income Poverty Ratios in Middle Delta Contiguous Counties in Egypt

    Directory of Open Access Journals (Sweden)

    Sohair F Higazi

    2013-02-01

    Full Text Available Regression analysis depends on several assumptions that have to be satisfied. A major assumption that is never satisfied when variables come from contiguous observations is the independence of error terms. Spatial analysis treats the violation of that assumption through two derived models that take the contiguity of observations into consideration. The data used are from Egypt's latest census (2006), for 93 counties in seven adjacent Middle Delta governorates. The dependent variable used is the percentage of individuals classified as poor (those who make less than $1 daily). Predictors are some demographic indicators. Exploratory Spatial Data Analysis (ESDA) is performed to examine the existence of spatial clustering and spatial autocorrelation between neighboring counties. The ESDA revealed spatial clusters and spatial correlation between locations. Three statistical models are applied to the data: the Ordinary Least Squares regression model (OLS), the Spatial Error Model (SEM) and the Spatial Lag Model (SLM). The Likelihood Ratio test and some information criteria are used to compare the SLM and SEM to OLS. The SEM model proved to be better than the SLM model. Recommendations are drawn regarding the two spatial models used.
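
    The model comparison step mentioned above typically rests on a likelihood ratio test (alongside information criteria) between the fitted spatial model and OLS. A small sketch of that test is given below; the log-likelihood values are placeholders, not the study's results.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_restricted, loglik_full, extra_params=1):
    """LR = 2 * (ll_full - ll_restricted), compared against a chi-square distribution."""
    lr = 2.0 * (loglik_full - loglik_restricted)
    p_value = chi2.sf(lr, df=extra_params)
    return lr, p_value

# Hypothetical log-likelihoods: OLS versus a spatial error model with one extra
# parameter (the spatial autoregressive coefficient lambda).
lr, p = likelihood_ratio_test(loglik_restricted=-412.7, loglik_full=-401.3)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```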

  20. Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data

    Directory of Open Access Journals (Sweden)

    Leili TAPAK

    2016-02-01

    Full Text Available Background: A substantial part of microarray studies is to predict patients' survival based on their gene expression profiles. Variable selection techniques are powerful tools to handle high dimensionality in the analysis of microarray data. However, these techniques have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated on in the years 1987 to 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator, smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and the elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score and c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach was found to outperform the other methods. The elastic net had the lowest integrated Brier score (0.137±0.07) and the greatest median over-time AUC and C-index (0.803±0.06 and 0.779±0.13, respectively). Five of the 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. It was indicated that the expression of RTN4, SON, IGF1R and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had a higher capability than the other methods for the prediction of survival time in patients with bladder cancer in the presence of competing risks, based on an additive hazards model. Keywords: Survival analysis, Microarray data, Additive hazards model, Variable selection, Bladder cancer

  1. A framework for modeling clustering in natural hazard catastrophe risk management and the implications for re/insurance loss perspectives

    Directory of Open Access Journals (Sweden)

    S. Khare

    2014-08-01

    Full Text Available In this paper, we present a novel framework for modelling clustering in natural hazard risk models. The framework we present is founded on physical principles, where large-scale oscillations in the physical system are the source of non-Poissonian (clustered) frequency behaviour. We focus on a particular mathematical implementation of the "Super-Cluster" methodology that we introduce. This mathematical framework has a number of advantages, including tunability to the problem at hand as well as the ability to model cross-event correlation. Using European windstorm data as an example, we provide evidence that historical data show strong evidence of clustering. We then develop Poisson and clustered simulation models for the data, clearly demonstrating the superiority of the clustered model, which we have implemented using the Poisson-mixtures approach. We then discuss the implications of including clustering in models of prices on catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of new insights into the impact clustering has on modelled catXL contract prices. The simple model presented in this paper provides an insightful starting point for practitioners of natural hazard risk modelling.
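
    The clustered frequency model above is implemented as a Poisson mixture. The short simulation below contrasts a plain Poisson with a gamma-mixed Poisson (equivalently, a negative binomial) and shows the over-dispersion that signals clustering; the rates and parameters are illustrative, not fitted to windstorm data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, mean_rate = 10000, 4.0

# Plain Poisson: every year has the same expected storm count.
poisson_counts = rng.poisson(mean_rate, n_years)

# Poisson mixture: the yearly rate itself fluctuates (e.g. with a large-scale
# oscillation), drawn here from a gamma distribution with the same mean.
shape = 2.0
yearly_rates = rng.gamma(shape, mean_rate / shape, n_years)
mixed_counts = rng.poisson(yearly_rates)

for name, x in [("Poisson", poisson_counts), ("Poisson mixture", mixed_counts)]:
    print(f"{name:16s} mean={x.mean():.2f}  variance={x.var():.2f}")
# The mixture keeps the mean but inflates the variance -- the clustering signature.
```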

  2. Allometric scaling and cell ratios in multi-organ in vitro models of human metabolism

    Directory of Open Access Journals (Sweden)

    Nadia eUcciferri

    2014-12-01

    Full Text Available Intelligent in vitro models able to recapitulate the physiological interactions between tissues in the body have enormous potential, as they enable detailed studies of specific two-way or higher order tissue communication. These models are the first step towards building an integrated picture of systemic metabolism and signalling in physiological or pathological conditions. However, the rational design of in vitro models of cell-cell or cell-tissue interaction is difficult, as quite often cell culture experiments are driven by the device used rather than by design considerations. Indeed, very little research has been carried out on in vitro models of metabolism connecting different cell or tissue types in a physiologically and metabolically relevant manner. Here we analyse the physiological relationship between cells, cell metabolism and exchange in the human body using allometric rules, downscaling them to an organ-on-a-plate device. In particular, in order to establish appropriate cell ratios in the system in a rational manner, two different allometric scaling models (the Cell Number Scaling Model, CNSM, and the Metabolic and Surface Scaling Model, MSSM) are proposed and applied to a two-compartment model of hepatic-vascular metabolic cross-talk. The theoretical scaling studies illustrate that the design, and hence relevance, of multi-organ models is principally determined by experimental constraints. Two experimentally feasible model configurations are then implemented in a multi-compartment organ-on-a-plate device. An analysis of the metabolic response of the two configurations demonstrates that their glucose and lipid balance is quite different, with only one of the two models recapitulating physiological-like homeostasis. In conclusion, not only do cross-talk and physical stimuli play an important role in in vitro models, but the numeric relationship between cells is also crucial to recreate in vitro interactions which can be extrapolated to the in vivo

  3. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Full Text Available Abstract Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, while also allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
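
    A minimal sketch of a Bayesian allometric (power-law) regression for the fetoplacental relationship, written with PyMC, is shown below; the priors, the synthetic data, and the absence of confounders and hierarchy are assumptions for illustration, not the authors' exact specification.

```python
import numpy as np
import arviz as az
import pymc as pm

rng = np.random.default_rng(7)
n = 26
fetal_weight = rng.normal(3300, 400, n)                          # grams, synthetic
placental_weight = 0.11 * fetal_weight**0.95 * rng.lognormal(0, 0.08, n)

# Allometric model on the log scale: log(P) = a + b * log(F) + noise.
# b = 1 would correspond to an isometric (constant) fetoplacental weight ratio.
with pm.Model() as allometric:
    a = pm.Normal("a", 0.0, 5.0)
    b = pm.Normal("b", 1.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    mu = a + b * np.log(fetal_weight)
    pm.Normal("log_placenta", mu, sigma, observed=np.log(placental_weight))
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(idata, var_names=["a", "b"]))
```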

  4. [Research on the model of spectral unmixing for minerals based on derivative of ratio spectroscopy].

    Science.gov (United States)

    Zhao, Heng-Qian; Zhang, Li-Fu; Wu, Tai-Xia; Huang, Chang-Ping

    2013-01-01

    The precise analysis of mineral abundance is a key difficulty in hyperspectral remote sensing research. In the present paper, based on the linear spectral mixture model, derivative of ratio spectroscopy (DRS) was introduced for spectral unmixing of visible to short-wave infrared (Vis-SWIR; 0.4-2.5 microm) reflectance data. Mixtures of different proportions of plaster and allochite were analyzed to estimate the accuracy of the spectral unmixing model based on DRS. For the best 5 strongly linear bands, the Pearson correlation coefficient (PCC) between the estimated abundances and the actual abundances was higher than 99.9%, while the root mean square error (RMSE) was less than 2.2%. The results show that the new spectral unmixing model based on DRS is simple, mathematically rigorous, and highly precise. It has great potential in high-precision quantitative analysis of spectral mixtures with fixed endmembers.
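
    The DRS idea can be sketched in a few lines: dividing the mixed spectrum by one endmember and differentiating removes that endmember's (now constant) contribution, leaving a signal linear in the other endmember's abundance. The spectra below are synthetic stand-ins, not the plaster/allochite measurements.

```python
import numpy as np

wav = np.linspace(0.4, 2.5, 500)                       # wavelength, micrometres
endmember_a = 0.3 + 0.2 * np.sin(4 * wav)              # synthetic reflectance spectra
endmember_b = 0.5 + 0.1 * np.cos(3 * wav)

f_a_true = 0.35                                        # true abundance of endmember a
mixed = f_a_true * endmember_a + (1 - f_a_true) * endmember_b

# Derivative of ratio spectroscopy: dividing by endmember b and differentiating
# removes b's (constant) contribution, leaving a term proportional to f_a.
d_ratio_mixed = np.gradient(mixed / endmember_b, wav)
d_ratio_a = np.gradient(endmember_a / endmember_b, wav)

# Least-squares estimate of the abundance from the two derivative spectra.
f_a_est = np.dot(d_ratio_a, d_ratio_mixed) / np.dot(d_ratio_a, d_ratio_a)
print("estimated abundance of endmember a:", round(f_a_est, 3))
```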

  5. Estimation of direct effects for survival data by using the Aalen additive hazards model

    DEFF Research Database (Denmark)

    Martinussen, T.; Vansteelandt, S.; Gerster, M.

    2011-01-01

    We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned...

  6. Assessing End-Of-Supply Risk of Spare Parts Using the Proportional Hazard Model

    NARCIS (Netherlands)

    X. Li (Xishu); R. Dekker (Rommert); C. Heij (Christiaan); M. Hekimoğlu (Mustafa)

    2016-01-01

    textabstractOperators of long field-life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on operating costs of firms needing these parts. Existing end-of-supply evaluation me

  7. Spatial Modelling of Urban Physical Vulnerability to Explosion Hazards Using GIS and Fuzzy MCDA

    Directory of Open Access Journals (Sweden)

    Yasser Ebrahimian Ghajari

    2017-07-01

    Full Text Available Most of the world’s population is concentrated in accumulated spaces in the form of cities, making the concept of urban planning a significant issue for consideration by decision makers. Urban vulnerability is a major issue which arises in urban management, and is simply defined as how vulnerable various structures in a city are to different hazards. Reducing urban vulnerability and enhancing resilience are considered to be essential steps towards achieving urban sustainability. To date, a vast body of literature has focused on investigating urban systems’ vulnerabilities with regard to natural hazards. However, less attention has been paid to vulnerabilities resulting from man-made hazards. This study proposes to investigate the physical vulnerability of buildings in District 6 of Tehran, Iran, with respect to intentional explosion hazards. A total of 14 vulnerability criteria are identified according to the opinions of various experts, and standard maps for each of these criteria have been generated in a GIS environment. Ultimately, an ordered weighted averaging (OWA technique was applied to generate vulnerability maps for different risk conditions. The results of the present study indicate that only about 25 percent of buildings in the study area have a low level of vulnerability under moderate risk conditions. Sensitivity analysis further illustrates the robustness of the results obtained. Finally, the paper concludes by arguing that local authorities must focus more on risk-reduction techniques in order to reduce physical vulnerability and achieve urban sustainability.
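
    The aggregation step above uses ordered weighted averaging (OWA), in which the weights are applied to the sorted criterion values rather than to particular criteria; shifting weight toward the largest values produces the more pessimistic risk conditions mentioned in the abstract. A small sketch follows; the scores and weight vectors are illustrative only.

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted average: weights apply to the sorted scores
    (largest first), not to specific criteria."""
    scores = np.sort(np.asarray(scores, float))[::-1]
    weights = np.asarray(weights, float)
    return float(np.dot(scores, weights / weights.sum()))

criteria = [0.9, 0.4, 0.7, 0.2, 0.6]   # standardized vulnerability criteria (invented)
print("pessimistic:", owa(criteria, [0.5, 0.25, 0.15, 0.07, 0.03]))  # stresses worst values
print("neutral:    ", owa(criteria, [0.2, 0.2, 0.2, 0.2, 0.2]))      # plain average
```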

  8. Modelling risk in high hazard operations: integrating technical, organisational and cultural factors

    NARCIS (Netherlands)

    Ale, B.J.M.; Hanea, D.M.; Sillem, S.; Lin, P.H.; Van Gulijk, C.; Hudson, P.T.W.

    2012-01-01

    Recent disasters in high hazard industries such as Oil and Gas Exploration (The Deepwater Horizon) and Petrochemical production (Texas City) have been found to have causes that range from direct technical failures through organizational shortcomings right up to weak regulation and inappropriate comp

  11. Majority rule has transition ratio 4 on Yule trees under a 2-state symmetric model.

    Science.gov (United States)

    Mossel, Elchanan; Steel, Mike

    2014-11-01

    Inferring the ancestral state at the root of a phylogenetic tree from states observed at the leaves is a problem arising in evolutionary biology. The simplest technique - majority rule - estimates the root state by the most frequently occurring state at the leaves. Alternative methods - such as maximum parsimony - explicitly take the tree structure into account. Since either method can outperform the other on particular trees, it is useful to consider the accuracy of the methods on trees generated under some evolutionary null model, such as a Yule pure-birth model. In this short note, we answer a recently posed question concerning the performance of majority rule on Yule trees under a symmetric 2-state Markovian substitution model of character state change. We show that majority rule is accurate precisely when the ratio of the birth (speciation) rate of the Yule process to the substitution rate exceeds the value 4. By contrast, maximum parsimony has been shown to be accurate only when this ratio is at least 6. Our proof relies on a second moment calculation, coupling, and a novel application of a reflection principle.

  12. Analysis of enhanced modal damping ratio in porous materials using an acoustic-structure interaction model

    Directory of Open Access Journals (Sweden)

    Junghwan Kook

    2014-12-01

    Full Text Available The aim of this paper is to investigate the enhancement of the damping ratio of a structure with embedded microbeam resonators in air-filled internal cavities. In this context, we discuss theoretical aspects in the framework of the effective modal damping ratio (MDR) and derive an approximate relation expressing how increased damping due to the acoustic medium surrounding the microbeam affects the MDR of the macrobeam. We further analyze the effect of including dissipation in the acoustic medium by using finite element (FE) analysis with acoustic-structure interaction (ASI) and a simple phenomenological acoustic loss model. An eigenvalue analysis is carried out to demonstrate the improvement of the damping characteristics of the macrobeam with the resonating microbeam in lossy air, and the results are compared to a forced vibration analysis for a macrobeam with one or multiple embedded microbeams. Finally, we demonstrate the effect of randomness in the position and size of the microbeams and discuss the difference between the phenomenological acoustic loss model and a full thermoacoustic model.

  13. Conveying Flood Hazard Risk Through Spatial Modeling: A Case Study for Hurricane Sandy-Affected Communities in Northern New Jersey

    Science.gov (United States)

    Artigas, Francisco; Bosits, Stephanie; Kojak, Saleh; Elefante, Dominador; Pechmann, Ildiko

    2016-10-01

    The accurate forecast of Hurricane Sandy's sea surge was the result of integrating the most sophisticated environmental monitoring technology available. This stands in contrast to the limited information and technology that exists at the community level to translate these forecasts into flood hazard levels on the ground at scales that are meaningful to property owners. Appropriately scaled maps with high levels of certainty can be effectively used to convey exposure to flood hazard at the community level. This paper explores the most basic analysis and data required to generate a relatively accurate flood hazard map to convey inundation risk due to sea surge. A Boolean overlay analysis of four input layers (elevation and slope derived from LiDAR data, and distances from streams and catch basins derived from aerial photography and field reconnaissance) was used to create a spatial model that explained 55 % of the extent and depth of the flood during Hurricane Sandy. When a ponding layer was added to the previous model to account for depressions that would fill and spill over to nearby areas, the new model explained almost 70 % of the extent and depth of the flood. The study concludes that fairly accurate maps can be created with readily available information and that it is possible to infer a great deal about risk of inundation at the property level from flood hazard maps. The study further concludes that, although local communities are encouraged to prepare for disasters, the existing Federal emergency management framework in reality provides very little incentive to do so.
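
    A minimal sketch of the kind of Boolean overlay described above, applied to hypothetical co-registered rasters (the array names and threshold values are illustrative placeholders, not those of the study):

```python
import numpy as np

# Hypothetical co-registered rasters of identical shape, e.g. read from GeoTIFFs.
rng = np.random.default_rng(0)
elevation_m   = rng.uniform(0, 10, (500, 500))    # LiDAR-derived elevation
slope_deg     = rng.uniform(0, 15, (500, 500))    # LiDAR-derived slope
dist_stream_m = rng.uniform(0, 800, (500, 500))   # distance to nearest stream
dist_basin_m  = rng.uniform(0, 400, (500, 500))   # distance to nearest catch basin

# Boolean overlay: a cell is flagged as flood-prone only if it satisfies
# every criterion (illustrative thresholds).
flood_prone = (
    (elevation_m < 3.0) &
    (slope_deg < 2.0) &
    ((dist_stream_m < 200.0) | (dist_basin_m < 100.0))
)

print(f"fraction of cells flagged as flood-prone: {flood_prone.mean():.2%}")
```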

  14. Utilizing NASA Earth Observations to Model Volcanic Hazard Risk Levels in Areas Surrounding the Copahue Volcano in the Andes Mountains

    Science.gov (United States)

    Keith, A. M.; Weigel, A. M.; Rivas, J.

    2014-12-01

    Copahue is a stratovolcano located along the rim of the Caviahue Caldera near the Chile-Argentina border in the Andes Mountain Range. There are several small towns located in proximity of the volcano with the two largest being Banos Copahue and Caviahue. During its eruptive history, it has produced numerous lava flows, pyroclastic flows, ash deposits, and lahars. This isolated region has steep topography and little vegetation, rendering it poorly monitored. The need to model volcanic hazard risk has been reinforced by recent volcanic activity that intermittently released several ash plumes from December 2012 through May 2013. Exposure to volcanic ash is currently the main threat for the surrounding populations as the volcano becomes more active. The goal of this project was to study Copahue and determine areas that have the highest potential of being affected in the event of an eruption. Remote sensing techniques were used to examine and identify volcanic activity and areas vulnerable to experiencing volcanic hazards including volcanic ash, SO2 gas, lava flow, pyroclastic density currents and lahars. Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), EO-1 Advanced Land Imager (ALI), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), ISS ISERV Pathfinder, and Aura Ozone Monitoring Instrument (OMI) products were used to analyze volcanic hazards. These datasets were used to create a historic lava flow map of the Copahue volcano by identifying historic lava flows, tephra, and lahars both visually and spectrally. Additionally, a volcanic risk and hazard map for the surrounding area was created by modeling the possible extent of ash fallout, lahars, lava flow, and pyroclastic density currents (PDC) for future eruptions. These model results were then used to identify areas that should be prioritized for disaster relief and evacuation orders.

  16. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock ...

  17. Qualitative analysis on a diffusive prey-predator model with ratio-dependent functional response

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we investigate a prey-predator model with diffusion and ratio-dependent functional response subject to the homogeneous Neumann boundary condition. Our main focuses are on the global behavior of the reaction-diffusion system and its corresponding steady-state problem. We first apply various Lyapunov functions to discuss the global stability of the unique positive constant steady-state. Then, for the steady-state system, we establish some a priori upper and lower estimates for positive steady-states, and derive several results for non-existence of positive non-constant steady-states if the diffusion rates are large or small.

  18. Filippov Ratio-Dependent Prey-Predator Model with Threshold Policy Control

    Directory of Open Access Journals (Sweden)

    Xianghong Zhang

    2013-01-01

    Full Text Available A Filippov ratio-dependent prey-predator model with an economic threshold is proposed and studied. First, the sliding mode domain, the sliding mode dynamics, and the existence of four types of equilibria and tangent points are investigated. Further, the stability of the pseudoequilibrium is addressed using theoretical and numerical methods, as are the local sliding bifurcations, including regular/virtual equilibrium bifurcations and boundary node bifurcations. Finally, some global sliding bifurcations are addressed numerically. The globally stable touching cycle indicates that the density of the pest population can be successfully maintained below the economic threshold level by designing suitable threshold policy strategies.

  19. Quasars Are Not Light-Bulbs: Testing Models of Quasar Lifetimes with the Observed Eddington Ratio Distribution

    CERN Document Server

    Hopkins, Philip F

    2008-01-01

    We use the observed distribution of Eddington ratios as a function of supermassive black hole (BH) mass to constrain models of AGN lifetimes and lightcurves. Given the observed AGN luminosity function, a model for AGN lifetimes (time above a given luminosity) translates directly to a predicted Eddington ratio distribution. Models for self-regulated BH growth, in which feedback produces a 'blowout' decay phase after some peak luminosity (shutting down accretion) make specific predictions for the lifetimes distinct from those expected if AGN are simply gas starved (without feedback) and very different from simple phenomenological 'light bulb' models. Present observations of the Eddington ratio distribution, spanning 5 decades in Eddington ratio, 3 in BH mass, and redshifts z=0-1, agree with the predictions of self-regulated models, and rule out 'light-bulb', pure exponential, and gas starvation models at high significance. We compare the Eddington ratio distributions at fixed BH mass and fixed luminosity (both ...

  20. Single Colour Diagnostics of the Mass-to-light Ratio: Predictions from Galaxy Formation Models

    CERN Document Server

    Wilkins, Stephen M; Baugh, Carlton M; Lacey, Cedric G; Zuntz, Joe

    2013-01-01

    Accurate galaxy stellar masses are crucial to better understand the physical mechanisms driving the galaxy formation process. We use synthetic star formation and metal enrichment histories predicted by the {\\sc galform} galaxy formation model to investigate the precision with which various colours $(m_{a}-m_{b})$ can alone be used as diagnostics of the stellar mass-to-light ratio. As an example, we find that, at $z=0$, the {\\em intrinsic} (B$_{f435w}-$V$_{f606w}$) colour can be used to determine the intrinsic rest-frame $V$-band stellar mass-to-light ratio ($\\log_{10}\\Gamma_{V}=\\log_{10}[(M/M_{\\odot})/(L_{V}/L_{V\\odot})]$) with a precision of $\\sigma_{lg\\Gamma}\\simeq 0.06$ when the initial mass function and redshift are known beforehand. While the presence of dust, assuming a universal attenuation curve, can have a systematic effect on the inferred mass-to-light ratio using a single-colour relation, this is typically small as it is often possible to choose a colour for which the dust reddening vector is appro...

  1. Quantification of the thorax-to-abdomen breathing ratio for breathing motion modeling.

    Science.gov (United States)

    White, Benjamin M; Zhao, Tianyu; Lamb, James; Bradley, Jeffrey D; Low, Daniel A

    2013-06-01

    The purpose of this study was to develop a methodology to quantitatively measure the thorax-to-abdomen breathing ratio from a 4DCT dataset for breathing motion modeling and breathing motion studies. The thorax-to-abdomen breathing ratio was quantified by measuring the rate of cross-sectional volume increase throughout the thorax and abdomen as a function of tidal volume. Twenty-six 16-slice 4DCT patient datasets were acquired during quiet respiration using a protocol that acquired 25 ciné scans at each couch position. Fifteen datasets included data from the neck through the pelvis. Tidal volume, measured using a spirometer and abdominal pneumatic bellows, was used as the breathing-cycle surrogate. The cross-sectional volume encompassed by the skin contour, when compared for each CT slice against the tidal volume, exhibited a nearly linear relationship. A robust iteratively reweighted least squares regression analysis was used to determine η(i), defined as the amount of cross-sectional volume expansion at each slice i per unit tidal volume. The sum Ση(i) throughout all slices was predicted to be the ratio of the geometric expansion of the lungs to the tidal volume, 1.11. The Xiphoid process was selected as the boundary between the thorax and abdomen. The Xiphoid process slice was identified in a scan acquired at mid-inhalation. The imaging protocol had not originally been designed for measuring the thorax-to-abdomen breathing ratio, so the scans did not extend to the anatomy with η(i) = 0. Extrapolation of η(i) to η(i) = 0 was used to include the entire breathing volume. The thorax and abdomen regions were individually analyzed to determine the thorax-to-abdomen breathing ratios. There were 11 image datasets that had been scanned only through the thorax. For these cases, the abdomen breathing component was taken to equal 1.11 - Ση(i), where the sum was taken throughout the thorax. The average Ση(i) for thorax and abdomen image datasets was found to be 1.20
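
    A compact sketch of a robust iteratively reweighted least-squares fit of slice cross-sectional volume against tidal volume (synthetic numbers; the Huber weighting shown here is a common choice and not necessarily the exact scheme used in the study):

```python
import numpy as np

def irls_slope(tidal_volume, cross_section_volume, k=1.345, iters=20):
    """Robust straight-line fit v = eta * tidal + intercept using Huber weights."""
    X = np.column_stack([tidal_volume, np.ones_like(tidal_volume)])
    beta = np.linalg.lstsq(X, cross_section_volume, rcond=None)[0]   # OLS start
    for _ in range(iters):
        resid = cross_section_volume - X @ beta
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12            # robust sigma estimate
        r = resid / scale
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))             # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], cross_section_volume * sw, rcond=None)[0]
    return beta[0]   # eta(i): cross-sectional volume change per unit tidal volume

# Synthetic example for one CT slice
rng = np.random.default_rng(1)
tidal = rng.uniform(100, 800, 25)                     # mL, from spirometry/bellows
volume = 0.02 * tidal + 1500 + rng.normal(0, 4, 25)   # mL, skin-contour cross-sectional volume
print(f"estimated eta for this slice: {irls_slope(tidal, volume):.4f}")
```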

  2. Models of magma-aquifer interactions and their implications for hazard assessment

    Science.gov (United States)

    Strehlow, Karen; Gottsmann, Jo; Tumi Gudmundsson, Magnús

    2014-05-01

    Interactions of magmatic and hydrological systems are manifold, complex and poorly understood. On the one side they bear a significant hazard potential in the form of phreatic explosions or by causing "dry" effusive eruptions to turn into explosive phreatomagmatic events. On the other side, they can equally serve to reduce volcanic risk, as resulting geophysical signals can help to forecast eruptions. It is therefore necessary to put efforts towards answering some outstanding questions regarding magma - aquifer interactions. Our research addresses these problems from two sides. Firstly, aquifers respond to magmatic activity and they can also become agents of unrest themselves. Therefore, monitoring the hydrology can provide a valuable window into subsurface processes in volcanic areas. Changes in temperature and strain conditions, seismic excitation or the injection of magmatic fluids into hydrothermal systems are just a few of the proposed processes induced by magmatic activity that affect the local hydrology. Interpretations of unrest signals as groundwater responses are described for many volcanoes and include changes in water table levels, changes in temperature or composition of hydrothermal waters and pore pressure-induced ground deformation. Volcano observatories can track these hydrological effects for example with potential field investigations or the monitoring of wells. To fully utilise these indicators as monitoring and forecasting tools, however, it is necessary to improve our understanding of the ongoing mechanisms. Our hydrogeophysical study uses finite element analysis to quantitatively test proposed mechanisms of aquifer excitation and the resultant geophysical signals. Secondly, volcanic activity is influenced by the presence of groundwater, including phreatomagmatic and phreatic eruptions. We focus here on phreatic explosions at hydrothermal systems. At least two of these impulsive events occurred in 2013: In August at the Icelandic volcano

  3. [Proportional hazards model of birth intervals among marriage cohorts since the 1960s].

    Science.gov (United States)

    Otani, K

    1987-01-01

    With a view to investigating the possibility of an attitudinal change towards the timing of 1st and 2nd births, proportional hazards model analysis of the 1st and 2nd birth intervals and univariate life table analysis were both carried out. Results showed that love matches and conjugal families immediately after marriage are accompanied by a longer 1st birth interval than others, even after controlling for other independent variables. Marriage cohort analysis also shows a net effect on the relative risk of having a 1st birth. Marriage cohorts since the mid-1960s demonstrate a shorter 1st birth interval than the 1961-63 cohort. With regard to the 2nd birth interval, longer 1st birth intervals, arranged marriages, conjugal families immediately following marriage, and higher ages at 1st marriage of women tended to provoke a longer 2nd birth interval. There is no interaction between the 1st birth interval and marriage cohort. Once other independent variables were controlled, with the exception of the marriage cohorts of the early 1970s, the authors found no effect of marriage cohort on the relative risk of having a 2nd birth. This suggests that an attitudinal change towards the timing of births in this period was mainly restricted to that of a 1st birth. Fluctuations in the 2nd birth interval during the 1970-72 marriage cohort were scrutinized in detail. As a result, the authors found that conjugal families after marriage, wives with low educational status, women with husbands in white collar professions, women with white collar fathers, and wives with high age at 1st marriage who married during 1970-72 and had a 1st birth interval during 1972-74 suffered most from the pronounced rise in the 2nd birth interval. This might be due to the relatively high sensitivity to a change in socioeconomic status; the oil crisis occurring around the time of marriage and 1st birth induced a delay in the 2nd birth. The unanimous decrease in the 2nd birth interval among the 1973
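
    For orientation, a minimal proportional-hazards fit on birth-interval-style data might look like the following (the lifelines package and the toy covariates are illustrative; this is not the study's data or software):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: months from marriage to first birth, a right-censoring indicator,
# a love-match indicator and a conjugal-family indicator (all hypothetical).
df = pd.DataFrame({
    "months_to_first_birth": [11, 14, 30, 9, 25, 40, 13, 18, 22, 36],
    "birth_observed":        [1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    "love_match":            [1, 0, 1, 0, 1, 1, 0, 0, 1, 1],
    "conjugal_family":       [1, 1, 1, 0, 0, 1, 0, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_first_birth", event_col="birth_observed")
cph.print_summary()   # hazard ratios: exp(coef) < 1 means a longer birth interval
```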

  4. Influence of Climate Change on Flood Hazard using Climate Informed Bayesian Hierarchical Model in Johnson Creek River

    Science.gov (United States)

    Zarekarizi, M.; Moradkhani, H.

    2015-12-01

    Extreme events have been shown to be affected by climate change, which influences hydrologic simulations for which stationarity is usually a main assumption. Studies have argued that this assumption leads to large biases in model estimates and, consequently, to higher flood hazard. Motivated by the importance of non-stationarity, we determined how the exceedance probabilities have changed over time in Johnson Creek River, Oregon. This could help estimate the probability of failure of a structure that was primarily designed, according to common practice, to resist less likely floods. We therefore built a climate-informed Bayesian hierarchical model in which non-stationarity was considered. Principal component analysis shows that the North Atlantic Oscillation (NAO), the Western Pacific Index (WPI) and the Eastern Asia (EA) pattern most strongly affect streamflow in this river. We modeled flood extremes using the peaks-over-threshold (POT) method rather than the conventional annual maximum flood (AMF) approach, mainly because it allows the model to be based on more information. We used available threshold selection methods to select a suitable threshold for the study area. Accounting for non-stationarity, model parameters vary through time with the climate indices. We developed several model scenarios and chose the one that best explained the variation in the data based on performance measures. We also estimated return periods under the non-stationary condition. Results show that ignoring non-stationarity could increase the flood hazard up to four times, which could increase the probability of an in-stream structure being overtopped.
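
    As a hedged illustration of the peaks-over-threshold building block only (a stationary fit; the climate-informed Bayesian hierarchical extension described above is not reproduced), a generalized Pareto distribution can be fitted to exceedances with standard tools:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
daily_flow = rng.gamma(shape=2.0, scale=30.0, size=20 * 365)   # synthetic 20-year discharge record

threshold = np.quantile(daily_flow, 0.98)              # a simple threshold choice
exceedances = daily_flow[daily_flow > threshold] - threshold

# Fit a GPD to the exceedances (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# Flow exceeded on average once every 100 years, assuming stationarity.
events_per_year = len(exceedances) / 20
p_exceed = 1.0 / (100 * events_per_year)                # per-event exceedance probability
q100 = threshold + stats.genpareto.ppf(1 - p_exceed, shape, loc=0, scale=scale)
print(f"threshold = {threshold:.1f}, 100-year flow ~ {q100:.1f}")
```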

  5. Space-time variation of the electron-to-proton mass ratio in a Weyl model

    CERN Document Server

    Landau, Susana J; Bonder, Yuri; Sudarsky, Daniel

    2010-01-01

    We consider a phenomenological model where the effective fermion masses depend on the local value of Weyl tensor as a possible explanation for the recent data indicating a space-time variation of the electron-to-proton mass ratio ($\\Delta \\mu/\\mu$) within the Milky Way. We also contrast the required value of the model's parameters with the bounds obtained for the same quantity from modern tests on the violation of the Weak Equivalence Principle (WEP). We obtain the theoretical expression for the variation of $\\Delta \\mu/\\mu$ and for the violation of the WEP as a function of the model parameters. We perform a least square minimization in order to obtain constraints on the model parameters from bounds on the WEP. The bounds obtained on the model parameters from the variation of $\\Delta \\mu/\\mu$ are inconsistent with the bounds obtained from constraints on the violation of the WEP. The variation of nucleon and electron masses through the Weyl tensor is not a viable model.

  6. Spatiotemporal Patterns in a Ratio-Dependent Food Chain Model with Reaction-Diffusion

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2014-01-01

    Full Text Available Predator-prey models describe the biological phenomenon of pursuit-evasion interaction, an interaction that is widespread in nature because species must obtain energy. In this paper, we have investigated a ratio-dependent spatially extended food chain model. Based on the bifurcation analysis (Hopf and Turing), we give the spatial pattern formation via numerical simulation, that is, the evolution process of the system near the coexistence equilibrium point (u2*, v2*, w2*), and find that the model dynamics exhibits complex pattern replication. For fixed parameters, on increasing the control parameter c1, the sequence of patterns “holes → holes-stripe mixtures → stripes → spots-stripe mixtures → spots” is observed. In the case of pure Hopf instability, the model exhibits chaotic wave pattern replication. Furthermore, we consider the pattern formation in the case in which the top predator is extinct, that is, the evolution process of the system near the equilibrium point (u1*, v1*, 0), and find that the model dynamics exhibits stripes-spots pattern replication. Our results show that the reaction-diffusion model is an appropriate tool for investigating the fundamental mechanisms of complex spatiotemporal dynamics. It will be useful for studying the dynamic complexity of ecosystems.

  7. Biases in modeled surface snow BC mixing ratios in prescribed-aerosol climate model runs

    OpenAIRE

    Doherty, S. J.; C. M. Bitz; M. G. Flanner

    2014-01-01

    Black carbon (BC) in snow lowers its albedo, increasing the absorption of sunlight, leading to positive radiative forcing, climate warming and earlier snowmelt. A series of recent studies have used prescribed-aerosol deposition flux fields in climate model runs to assess the forcing by black carbon in snow. In these studies, the prescribed mass deposition flux of BC to surface snow is decoupled from the mass deposition flux of snow water to the surface. Here we compare progn...

  8. Signal-to-noise ratio, contrast-to-noise ratio and pharmacokinetic modeling considerations in dynamic contrast-enhanced magnetic resonance imaging.

    Science.gov (United States)

    Li, Xin; Huang, Wei; Rooney, William D

    2012-11-01

    With advances in magnetic resonance imaging (MRI) technology, dynamic contrast-enhanced (DCE)-MRI is approaching the capability to simultaneously deliver both high spatial and high temporal resolutions for clinical applications. However, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) considerations and their impacts regarding pharmacokinetic modeling of the time-course data continue to represent challenges in the design of DCE-MRI acquisitions. Given that many acquisition parameters can affect the nature of DCE-MRI data, minimizing tissue-specific data acquisition discrepancy (among sites and scanner models) is as important as synchronizing pharmacokinetic modeling approaches. For cancer-related DCE-MRI studies where rapid contrast reagent (CR) extravasation is expected, current DCE-MRI protocols often adopt a three-dimensional fast low-angle shot (FLASH) sequence to achieve spatial-temporal resolution requirements. Based on breast and prostate DCE-MRI data acquired with different FLASH sequence parameters, this paper elucidates a number of SNR and CNR considerations for acquisition optimization and pharmacokinetic modeling implications therein. Simulations based on region of interest data further indicate that the effects of intercompartmental water exchange often play an important role in DCE time-course data modeling, especially for protocols optimized for post-CR SNR.

  9. Modeling retrospective attribution of responsibility to hazard-managing institutions: an example involving a food contamination incident.

    Science.gov (United States)

    Johnson, Branden B; Hallman, William K; Cuite, Cara L

    2015-03-01

    Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships regarding five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it. Responsibility was rated higher the more aware and free the institution. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. © 2014 Society for Risk Analysis.

  10. Model selection and model averaging in phylogenetics: advantages of akaike information criterion and bayesian approaches over likelihood ratio tests.

    Science.gov (United States)

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
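
    A small sketch of AIC-based model weights of the kind used in such model averaging (the log-likelihoods and parameter counts below are invented for illustration):

```python
import numpy as np

# Hypothetical maximized log-likelihoods and free-parameter counts for candidate
# substitution models (values are illustrative only).
models = {"JC69": (-3520.4, 0), "HKY85": (-3461.2, 4), "GTR+G": (-3449.8, 9)}

aic = {name: 2 * k - 2 * lnL for name, (lnL, k) in models.items()}
delta = {name: a - min(aic.values()) for name, a in aic.items()}
rel = {name: np.exp(-0.5 * d) for name, d in delta.items()}
weights = {name: r / sum(rel.values()) for name, r in rel.items()}   # Akaike weights

for name in models:
    print(f"{name:7s} AIC={aic[name]:9.1f}  dAIC={delta[name]:6.1f}  w={weights[name]:.3f}")
```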

  11. The application of a calibrated 3D ballistic trajectory model to ballistic hazard assessments at Upper Te Maari, Tongariro

    Science.gov (United States)

    Fitzgerald, R. H.; Tsunematsu, K.; Kennedy, B. M.; Breard, E. C. P.; Lube, G.; Wilson, T. M.; Jolly, A. D.; Pawson, J.; Rosenberg, M. D.; Cronin, S. J.

    2014-10-01

    On 6 August, 2012, Upper Te Maari Crater, Tongariro volcano, New Zealand, erupted for the first time in over one hundred years. Multiple vents were activated during the hydrothermal eruption, ejecting blocks up to 2.3 km and impacting ~ 2.6 km of the Tongariro Alpine Crossing (TAC) hiking track. Ballistic impact craters were mapped to calibrate a 3D ballistic trajectory model for the eruption. This was further used to inform future ballistic hazard. Orthophoto mapping revealed 3587 impact craters with a mean diameter of 2.4 m. However, field mapping of accessible regions indicated an average of at least four times more observable impact craters and a smaller mean crater diameter of 1.2 m. By combining the orthophoto and ground-truthed impact frequency and size distribution data, we estimate that approximately 13,200 ballistic projectiles were generated during the eruption. The 3D ballistic trajectory model and a series of inverse models were used to constrain the eruption directions, angles and velocities. When combined with eruption observations and geophysical observations, the model indicates that the blocks were ejected in five variously directed eruption pulses, in total lasting 19 s. The model successfully reproduced the mapped impact distribution using a mean initial particle velocity of 200 m/s with an accompanying average gas flow velocity over a 400 m radius of 150 m/s. We apply the calibrated model to assess ballistic hazard from the August eruption along the TAC. By taking the field mapped spatial density of impacts and an assumption that an average ballistic impact will cause serious injury or death (casualty) over an 8 m2 area, we estimate that the probability of casualty ranges from 1% to 16% along the affected track (assuming an eruption during the time of exposure). Future ballistic hazard and probabilities of casualty along the TAC are also assessed through application of the calibrated model. We model a magnitude larger eruption and illustrate
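
    The probability-of-casualty figures can be understood with a simple spatial-Poisson argument; the sketch below uses placeholder impact densities and treats impacts as spatially Poisson, which is our simplifying assumption rather than necessarily the authors' exact formulation:

```python
import math

def casualty_probability(impacts_per_m2, lethal_area_m2=8.0):
    """P(at least one ballistic impact within the lethal area around a person),
    assuming impacts follow a homogeneous spatial Poisson process."""
    return 1.0 - math.exp(-impacts_per_m2 * lethal_area_m2)

# Illustrative mapped impact densities along different track segments (impacts per m^2).
for density in (0.0013, 0.01, 0.022):
    print(f"density {density:.4f} /m^2 -> P(casualty) ~ {casualty_probability(density):.1%}")
```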

  12. GIS-Based Spatial Analysis and Modeling for Landslide Hazard Assessment: A Case Study in Upper Minjiang River Basin

    Institute of Scientific and Technical Information of China (English)

    FENG Wenlan; ZHOU Qigang; ZHANG Baolei; ZHOU Wancun; LI Ainong; ZHANG Haizhen; XIAN Wei

    2006-01-01

    By analyzing the topographic features of past landslides since the 1980s and the main land-cover types (including change information) in the landslide-prone area, this paper models the spatial distribution of landslide hazard in the upper Minjiang River Basin based on GIS spatial analysis. The results showed that landslide occurrence in this region is closely related to topographic features. Most areas with a high hazard probability were deeply incised gorges. Most of the investigated landslides clustered in areas with elevations lower than 3 000 m, owing to fragile topographic conditions and intensive human disturbance. Land-cover type, including its change information, was likely an important environmental factor triggering landslides. The destruction of vegetation, driven by population growth and its demands, increased the probability of landslides on steep slopes.

  13. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, Laura K.; Vogel, Richard M.

    2016-04-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.

  14. Credit Contagion Default Aalen Additive Hazard Model

    Institute of Scientific and Technical Information of China (English)

    田军; 周勇

    2012-01-01

    In this paper we consider credit default models based on the additive hazard model. We incorporate not only macroeconomic and firm-specific conditions but also, by introducing industry-specific covariates, the credit contagion between industries; in this way we overcome the underestimation of default correlation in earlier models. We provide maximum likelihood estimators and their asymptotic properties for the parametric additive hazard model. Two estimation methods are considered, and the optimal-weight estimator is shown to be more efficient. We also consider a semiparametric additive hazard model, for which we derive estimators and their asymptotic properties based on martingale estimating equations. Simulation studies show good performance of the proposed methods.
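
    For orientation, an Aalen additive hazard model can be fitted with standard survival software; a minimal sketch on hypothetical firm-level data (lifelines' AalenAdditiveFitter is used for illustration and is not the estimator developed in the paper):

```python
import pandas as pd
from lifelines import AalenAdditiveFitter

# Hypothetical default data: time to default (months), default indicator,
# a macro covariate and an industry-distress covariate.
df = pd.DataFrame({
    "months":          [12, 30, 7, 45, 22, 60, 15, 38, 9, 50],
    "defaulted":       [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "gdp_growth":      [-1.2, 2.1, -0.5, 1.8, 0.3, 2.4, -0.8, 1.1, -1.5, 2.0],
    "industry_stress": [0.8, 0.1, 0.9, 0.2, 0.5, 0.1, 0.7, 0.3, 1.0, 0.2],
})

# Small ridge penalty for numerical stability on this tiny illustrative sample.
aaf = AalenAdditiveFitter(coef_penalizer=0.5)
aaf.fit(df, duration_col="months", event_col="defaulted")
print(aaf.cumulative_hazards_.tail())   # time-varying additive covariate effects
```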

  15. Patterns formations in a diffusive ratio-dependent predator-prey model of interacting populations

    Science.gov (United States)

    Camara, B. I.; Haque, M.; Mokrani, H.

    2016-11-01

    The present investigation deals with the analysis of the spatial pattern formation of a diffusive predator-prey system with ratio-dependent functional response involving the influence of intra-species competition among predators within two-dimensional space. The appropriate condition of Turing instability around the interior equilibrium point of the present model has been determined. The emergence of complex patterns in the diffusive predator-prey model is illustrated through numerical simulations. These results are based on the existence of bifurcations of higher codimension such as Turing-Hopf, Turing-Saddle-node, Turing-Transcritical bifurcation, and the codimension-3 Turing-Takens-Bogdanov bifurcation. The paper concludes with discussions of our results in ecology.

  16. Cognitive theories as reinforcement history surrogates: the case of likelihood ratio models of human recognition memory.

    Science.gov (United States)

    Wixted, John T; Gaitan, Santino C

    2002-11-01

    B. F. Skinner (1977) once argued that cognitive theories are essentially surrogates for the organism's (usually unknown) reinforcement history. In this article, we argue that this notion applies rather directly to a class of likelihood ratio models of human recognition memory. The point is not that such models are fundamentally flawed or that they are not useful and should be abandoned. Instead, the point is that the role of reinforcement history in shaping memory decisions could help to explain what otherwise must be explained by assuming that subjects are inexplicably endowed with the relevant distributional information and computational abilities. To the degree that a role for an organism's reinforcement history is appreciated, the importance of animal memory research in understanding human memory comes into clearer focus. As Skinner was also fond of pointing out, it is only in the animal laboratory that an organism's history of reinforcement can be precisely controlled and its effects on behavior clearly understood.

  17. Congestion Control in the Internet by Employing a Ratio dependent Plant Herbivore Carnivorous Model

    CERN Document Server

    Jamali, Shahram

    2009-01-01

    The demand for Internet-based services has exploded over the last decade. Many organizations use the Internet, and particularly the World Wide Web, as their primary medium for communication and business. This phenomenal growth has dramatically increased the performance requirements for the Internet, and a good congestion control system is essential for a high-performance Internet. The current work proposes that congestion control in the Internet can be inspired by the population control tactics of nature. Toward this idea, each flow (W) in the network is viewed as a species whose population size is the congestion window size of the flow. Under this assumption, the congestion control problem is redefined as population control of the flow species. This paper defines a three-trophic food chain analogy for congestion control and gives a ratio-dependent model to control the population size of the W species within this plant-herbivore-carnivore food chain. Simulation results show that this model achieves fair bandw...

  18. An SIRS Epidemic Model with Vital Dynamics and a Ratio-Dependent Saturation Incidence Rate

    Directory of Open Access Journals (Sweden)

    Xinli Wang

    2015-01-01

    Full Text Available This paper presents an investigation of the dynamics of an epidemic model with vital dynamics and a nonlinear incidence rate of saturated mass action as a function of the ratio of the number of infectives to that of susceptibles. The stabilities of the disease-free equilibrium and the endemic equilibrium are first studied. Under the assumption of nonexistence of periodic solutions, the global dynamics of the model is established: either the number of infective individuals tends to zero as time evolves or the system exhibits bistability, in which there is a region such that the disease will persist if the initial position lies in the region and disappears if the initial position lies outside this region. Computer simulations illustrate these results.
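
    A hedged numerical sketch of an SIRS system with a ratio-dependent saturated incidence term (the functional form and parameter values below are illustrative choices, not the paper's exact model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sirs(t, y, b, d, beta, alpha, gamma, delta):
    S, I, R = y
    incidence = beta * S * I / (S + alpha * I)   # ratio-dependent saturated incidence
    dS = b - d * S - incidence + delta * R       # births, deaths, loss of immunity
    dI = incidence - (d + gamma) * I             # recovery at rate gamma
    dR = gamma * I - (d + delta) * R
    return [dS, dI, dR]

params = (1.0, 0.02, 0.6, 2.0, 0.1, 0.05)        # b, d, beta, alpha, gamma, delta
sol = solve_ivp(sirs, (0, 400), [40.0, 1.0, 0.0], args=params)
print("final (S, I, R):", np.round(sol.y[:, -1], 3))
```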

  19. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J

    2008-02-11

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.

  20. Hazard Identification of the Offshore Three-phase Separation Process Based on Multilevel Flow Modeling and HAZOP

    DEFF Research Database (Denmark)

    Wu, Jing; Zhang, Laibin; Lind, Morten;

    2013-01-01

    HAZOP studies are widely accepted in the chemical and petroleum industries as the method for conducting process hazard analysis related to the design, maintenance and operation of systems. Different tools have been developed to automate HAZOP studies. In this paper, a HAZOP reasoning method based on function-oriented modeling, Multilevel Flow Modeling (MFM), is extended with function roles. A graphical MFM editor, combined with the reasoning capabilities of the MFM Workbench developed by DTU, is applied to automate HAZOP studies. The method is proposed to support the “brain-storming” sessions...

  1. The null distribution of likelihood-ratio statistics in the conditional-logistic linkage model.

    Science.gov (United States)

    Song, Yeunjoo E; Elston, Robert C

    2013-01-01

    Olson's conditional-logistic model retains the nice property of the LOD score formulation and has advantages over other methods that make it an appropriate choice for complex trait linkage mapping. However, the asymptotic distribution of the conditional-logistic likelihood-ratio (CL-LR) statistic with genetic constraints on the model parameters is unknown for some analysis models, even in the case of samples comprising only independent sib pairs. We derive approximations to the asymptotic null distributions of the CL-LR statistics and compare them with the empirical null distributions by simulation using independent affected sib pairs. Generally, the empirical null distributions of the CL-LR statistics match well the known or approximated asymptotic distributions for all analysis models considered except for the covariate model with a minimum-adjusted binary covariate. This work will provide useful guidelines for linkage analysis of real data sets for the genetic analysis of complex traits, thereby contributing to the identification of genes for disease traits.

  2. The ultimate signal-to-noise ratio in realistic body models.

    Science.gov (United States)

    Guérin, Bastien; Villena, Jorge F; Polimeridis, Athanasios G; Adalsteinsson, Elfar; Daniel, Luca; White, Jacob K; Wald, Lawrence L

    2016-12-04

    We compute the ultimate signal-to-noise ratio (uSNR) and G-factor (uGF) in a realistic head model from 0.5 to 21 Tesla. We excite the head model and a uniform sphere with a large number of electric and magnetic dipoles placed at 3 cm from the object. The resulting electromagnetic fields are computed using an ultrafast volume integral solver and are used as basis functions for the uSNR and uGF computations. Our generalized uSNR calculation shows good convergence in the sphere and the head and is in close agreement with the dyadic Green's function approach in the uniform sphere. In both models, the uSNR versus B0 trend was linear at shallow depths and supralinear at deeper locations. At equivalent positions, the rate of increase of the uSNR with B0 was greater in the sphere than in the head model. The uGFs were lower in the realistic head than in the sphere for acceleration in the anterior-posterior direction, but similar for the left-right direction. The uSNR and uGFs are computable in nonuniform body models and provide fundamental performance limits for human imaging with close-fitting MRI array coils. Magn Reson Med, 2016. © 2016 International Society for Magnetic Resonance in Medicine.

  3. Relative numerosity discrimination in the pigeon: further tests of the linear-exponential-ratio model.

    Science.gov (United States)

    Machado, Armando; Keen, Richard

    2002-04-28

    This study tested a model of how animals discriminate the relative numerosity of stimuli in successive or sequential presentation tasks. In a discrete-trials procedure, pigeons were shown one light nf times and then another light nl times. Next they received food for choosing the light that had occurred the least number of times during the sample. At issue were (a) how performance varies with the interval between the two stimulus sets (the interblock interval) and the interval between the end of the sample and the beginning of the choice period (the retention interval); and (b) whether a simple mathematical model of the discrimination process could account for the data. The model assumed that the influence of a stimulus on choice increases linearly when the stimulus is presented but decays exponentially when the stimulus is absent; choice probability is given by the ratio of the influence values of the two stimuli. The model also assumed that as the retention interval elapses there is an increasing probability that the ongoing discriminative process is disrupted, in which case the animal responds randomly. Results showed that increasing the interblock interval reduced the probability of choosing the last stimulus of the sample as the least-frequent one. Increasing the retention interval reduced accuracy without inducing any stimulus bias. The model accounted well for the major trends in the data.
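
    A loose paraphrase of the linear-growth/exponential-decay ratio rule in code (parameter values are arbitrary and the lapse process is included only schematically; this is not the authors' parameterization):

```python
import math

def p_choose_last_as_least_frequent(n_first, n_last, inter_block, retention,
                                    growth=1.0, decay=0.1, lapse_rate=0.02):
    """Probability of judging the last-presented stimulus as the least frequent."""
    # Influence grows linearly with presentations and decays while the stimulus is absent:
    # the first stimulus decays over the interblock interval plus the retention interval.
    infl_first = growth * n_first * math.exp(-decay * (inter_block + retention))
    infl_last = growth * n_last * math.exp(-decay * retention)
    # Ratio rule: the stimulus with the smaller influence is judged less frequent, so the
    # probability of picking the last stimulus rises with the first stimulus's influence.
    p_ratio = infl_first / (infl_first + infl_last)
    # With probability p_lapse the discriminative process is disrupted and the bird guesses.
    p_lapse = 1.0 - math.exp(-lapse_rate * retention)
    return (1 - p_lapse) * p_ratio + 0.5 * p_lapse

print(p_choose_last_as_least_frequent(n_first=4, n_last=8, inter_block=2.0, retention=5.0))
```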

  4. Quality Model and Artificial Intelligence Base Fuel Ratio Management with Applications to Automotive Engine

    Directory of Open Access Journals (Sweden)

    Mojdeh Piran

    2014-01-01

    Full Text Available In this research, Internal Combustion (IC) engine modeling is addressed and a multi-input-multi-output, artificial-intelligence-based, chattering-free baseline sliding mode control scheme is developed with guaranteed stability to simultaneously control fuel ratios to desired levels under various air flow disturbances by regulating the mass flow rates of the engine PFI and DI injection systems. Modeling an entire IC engine is an important and complicated process because engines are nonlinear, multi-input multi-output and time variant. One purpose of accurate modeling is to save development costs on real engines and to minimize the risk of damaging an engine when validating controller designs. A small model developed for specific controller design purposes can then be validated on a larger, more complicated model. Analytical dynamic nonlinear modeling of the internal combustion engine is carried out using the Euler-Lagrange method, compromising between accuracy and complexity. A baseline estimator with varying parameter gain is designed with guaranteed stability to allow implementation of the proposed state feedback sliding mode methodology in a MATLAB simulation environment, where the sliding mode strategy is implemented in a model engine control module (“software”). To estimate the dynamic model of the IC engine, a fuzzy inference engine is applied to the baseline sliding mode methodology. The performance of the fuzzy inference baseline sliding methodology was compared with a well-tuned baseline multi-loop PID controller through MATLAB simulations and showed improvements; the simulations were conducted to validate the feasibility of utilizing the developed controller and state estimator for automotive engines. The proposed tracking method is designed to optimally track the desired FR by minimizing the error between the trapped in-cylinder mass and the product of the desired FR and fuel mass over a given time interval.

  5. Modeling Flood Hazard Zones at the Sub-District Level with the Rational Model Integrated with GIS and Remote Sensing Approaches

    Directory of Open Access Journals (Sweden)

    Daniel Asare-Kyei

    2015-07-01

    Full Text Available Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability, while global datasets have resolutions too coarse to be relevant for local scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS) techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI). Results show that about half of the study areas fall into high intensity flood zones. Empirical validation using a statistical confusion matrix and the principles of Participatory GIS shows that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported by local expert knowledge, which accurately classified 79% of communities deemed to be highly susceptible to flood hazard. The results will assist disaster managers in reducing the risk of flood disasters at the community level, where risk outcomes first materialize.

  6. Spatial Modelling of Urban Physical Vulnerability to Explosion Hazards Using GIS and Fuzzy MCDA

    OpenAIRE

    Yasser Ebrahimian Ghajari; Ali Asghar Alesheikh; Mahdi Modiri; Reza Hosnavi; Morteza Abbasi

    2017-01-01

    Most of the world’s population is concentrated in accumulated spaces in the form of cities, making the concept of urban planning a significant issue for consideration by decision makers. Urban vulnerability is a major issue which arises in urban management, and is simply defined as how vulnerable various structures in a city are to different hazards. Reducing urban vulnerability and enhancing resilience are considered to be essential steps towards achieving urban sustainability. To date, a va...

  7. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes, collected from 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, the Wood, Dhanoa and Sikka mixed models provided the best fit of the lactation curve for FPR in third-parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum. On the other hand, the minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
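
    As a point of reference, the Wood curve (one of the seven models compared) can be fitted to test-day records with a nonlinear least-squares call; the data and starting values below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood incomplete-gamma lactation curve: y = a * t**b * exp(-c * t)."""
    return a * np.power(t, b) * np.exp(-c * t)

# Hypothetical monthly test-day fat-to-protein ratios (months in milk 1..10).
dim = np.arange(1, 11, dtype=float)
fpr = np.array([1.38, 1.22, 1.12, 1.08, 1.06, 1.05, 1.06, 1.08, 1.10, 1.13])

# With b < 0 and c < 0 the curve has a minimum, matching the shape of FPR records.
params, _ = curve_fit(wood, dim, fpr, p0=(1.3, -0.1, -0.02))
print("a, b, c =", np.round(params, 4))
```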

  8. Gaussian mixture models for measuring local change down-track in LWIR imagery for explosive hazard detection

    Science.gov (United States)

    Spain, Christopher J.; Anderson, Derek T.; Keller, James M.; Popescu, Mihail; Stone, Kevin E.

    2011-06-01

    Burying objects below the ground can potentially alter their thermal properties. Moreover, there is often soil disturbance associated with recently buried objects. An intensity video frame image generated by an infrared camera in the medium and long wavelengths often locally varies in the presence of buried explosive hazards. Our approach to automatically detecting these anomalies is to estimate a background model of the image sequence. Pixel values that do not conform to the background model may represent local changes in thermal or soil signature caused by buried objects. Herein, we present a Gaussian mixture model-based technique to estimate the statistical model of background pixel values. The background model is used to detect anomalous pixel values on the road while a vehicle is moving. Foreground pixel confidence values are projected into the UTM coordinate system and a UTM confidence map is built. Different operating levels are explored and the connected component algorithm is then used to extract islands that are subjected to size, shape and orientation filters. We are currently using this approach as a feature in a larger multi-algorithm fusion system. However, in this article we also present results for using this algorithm as a stand-alone detector algorithm in order to further explore its value in detecting buried explosive hazards.
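
    A simplified sketch of per-pixel anomaly scoring against a Gaussian-mixture background model, using synthetic frames (a generic illustration only, not the authors' vehicle-mounted LWIR pipeline with UTM projection and connected-component filtering):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic "LWIR" frames: background intensities plus one warm anomaly patch.
background_frames = rng.normal(100.0, 5.0, size=(50, 64, 64))   # 50 training frames
test_frame = rng.normal(100.0, 5.0, size=(64, 64))
test_frame[30:34, 30:34] += 25.0                                 # simulated buried-object signature

# Fit a 2-component GMM to the pooled background pixel values.
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(background_frames.reshape(-1, 1))

# Pixels with low likelihood under the background model are flagged as foreground.
log_lik = gmm.score_samples(test_frame.reshape(-1, 1)).reshape(test_frame.shape)
foreground = log_lik < np.quantile(log_lik, 0.01)                # bottom 1% = anomalies
print("flagged pixels:", int(foreground.sum()))
```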

  9. Landslide Hazard Assessment and Mapping in the Guil Catchment (Queyras, Southern French Alps): From Landslide Inventory to Susceptibility Modelling

    Science.gov (United States)

    Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique

    2016-04-01

    Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As a part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which has been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need to have a better understanding of landslide hazards and their spatial distribution at the catchment scale to protect the local population from disasters with a multi-hazard origin. The aim of this study is to produce landslide susceptibility mapping at the 1:50 000 scale as a first step towards global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical methods (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling
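
    A minimal logistic-regression susceptibility sketch on hypothetical pixel samples (the predictors and synthetic inventory below are invented and do not reproduce the SAMCO study's factor set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000

# Hypothetical per-pixel predictors: slope (deg), distance to stream (m), forest cover flag.
slope = rng.uniform(0, 45, n)
dist_stream = rng.uniform(0, 500, n)
forest = rng.integers(0, 2, n)

# Synthetic landslide inventory: steeper, channel-connected, unforested pixels fail more often.
logit = -4 + 0.12 * slope - 0.004 * dist_stream - 0.8 * forest
landslide = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([slope, dist_stream, forest])
model = LogisticRegression(max_iter=1000).fit(X, landslide)

# Susceptibility = predicted probability of landslide occurrence for a given pixel.
print("coefficients:", np.round(model.coef_[0], 4))
print("example susceptibility:", round(model.predict_proba([[35.0, 50.0, 0]])[0, 1], 3))
```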

  10. Modeling the effects of distortion, contrast, and signal-to-noise ratio on stereophotogrammetric range mapping

    Science.gov (United States)

    Sellar, R. Glenn; Deen, Robert G.; Huffman, William C.; Willson, Reginald G.

    2016-09-01

    Stereophotogrammetry typically employs a pair of cameras, or a single moving camera, to acquire pairs of images from different camera positions, in order to create a three-dimensional 'range map' of the area being observed. Applications of this technique for building three-dimensional shape models include aerial surveying, remote sensing, machine vision, and robotics. Factors that would be expected to affect the quality of the range maps include the projection function (distortion) of the lenses and the contrast (modulation) and signal-to-noise ratio (SNR) of the acquired image pairs. Basic models of the precision with which the range can be measured assume a pinhole-camera model of the geometry, i.e. that the lenses provide perspective projection with zero distortion. Very-wide-angle or 'fisheye' lenses, however (e.g., those used by robotic vehicles), typically exhibit projection functions that differ significantly from this assumption. To predict the stereophotogrammetric range precision for such applications, we extend the model to the case of an equidistant lens projection function suitable for a very-wide-angle lens. To predict the effects of contrast and SNR on range precision, we perform numerical simulations using stereo image pairs acquired by a stereo camera pair on NASA's Mars rover Curiosity. Contrast is degraded and noise is added to these data in a controlled fashion and the effects on the quality of the resulting range maps are assessed.

  11. A hazard-based duration model for analyzing crossing behavior of cyclists and electric bike riders at signalized intersections.

    Science.gov (United States)

    Yang, Xiaobao; Huan, Mei; Abdel-Aty, Mohamed; Peng, Yichuan; Gao, Ziyou

    2015-01-01

    This paper presents a hazard-based duration approach to investigate riders' waiting times, violation hazards, associated risk factors, and their differences between cyclists and electric bike riders at signalized intersections. A total of 2322 two-wheeled riders approaching the intersections during red-light periods were observed in Beijing, China. The data were classified into censored and uncensored observations to distinguish between safe crossing and red-light running behavior. The results indicated that the red-light crossing behavior of most riders depended on waiting time: riders were inclined to terminate waiting and run against the traffic light as the waiting duration increased. Over half of the observed riders could not endure waiting 49 s or longer, while 25% of the riders endured 97 s or longer. Rider type, gender, waiting position, conformity tendency and crossing traffic volume were identified as having significant effects on riders' waiting times and violation hazards. Electric bike riders were found to be more sensitive than cyclists to external risk factors such as other riders' crossing behavior and crossing traffic volume. Moreover, unobserved heterogeneity was examined in the proposed models. The findings of this paper explain when and why cyclists and electric bike riders run against the red light at intersections. The results are useful for traffic design and management agencies implementing strategies to enhance the safety of riders.
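
    A hazard-based duration analysis with right censoring of the kind described above can be sketched with the `lifelines` package; the column names and the toy data frame are hypothetical, not the Beijing observations.

```python
# Minimal sketch of a duration (survival) model for waiting times at a red light.
# Censored rows are riders who waited out the red phase without violating.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "waiting_time": [12, 35, 49, 60, 97, 110, 20, 75],   # seconds waited
    "violated":     [1,   0,  1,  1,  0,   1,  1,  0],   # 1 = ran the red light, 0 = censored
    "ebike":        [1,   0,  1,  0,  1,   0,  0,  1],   # rider type covariate
    "traffic_vol":  [300, 450, 200, 500, 150, 400, 350, 250],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="waiting_time", event_col="violated")
cph.print_summary()   # hazard ratios for rider type, crossing traffic volume, etc.
```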

  12. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-01-25

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
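
    The classical Mantel-Haenszel pooled odds ratio that the paper extends to the beta-binomial setting can be computed in a few lines; the 2x2 study counts below are invented for illustration.

```python
# Mantel-Haenszel pooled odds ratio across k studies:
# OR_MH = sum(a_i*d_i/n_i) / sum(b_i*c_i/n_i), with invented counts.
import numpy as np

# columns: a = events/treatment, b = non-events/treatment, c = events/control, d = non-events/control
tables = np.array([
    [12, 88, 20, 80],
    [ 5, 45,  9, 41],
    [30, 70, 25, 75],
], dtype=float)

a, b, c, d = tables.T
n = a + b + c + d
or_mh = np.sum(a * d / n) / np.sum(b * c / n)
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```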

  13. Seismic hazard of the Kivu rift (western branch, East African Rift system): new neotectonic map and seismotectonic zonation model

    Science.gov (United States)

    Delvaux, Damien; Mulumba, Jean-Luc; Sebagenzi Mwene Ntabwoba, Stanislas; Fiama Bondo, Silvanos; Kervyn, François; Havenith, Hans-Balder

    2017-04-01

    The first detailed probabilistic seismic hazard assessment has been performed for the Kivu and northern Tanganyika rift region in Central Africa. This region, which forms the central part of the Western Rift Branch, is one of the most seismically active parts of the East African rift system. It was already included in large-scale seismic hazard assessments, but here we define a finer zonation model with 7 different zones representing the lateral variation of the geological and geophysical setting across the region. In order to build the new zonation model, we compiled homogeneous cross-border geological, neotectonic and seismotectonic maps over the central part of eastern D.R. Congo, SW Uganda, Rwanda, Burundi and NW Tanzania and defined a new neotectonic scheme. The seismic risk assessment is based on a new earthquake catalogue, compiled from various local and global earthquake catalogues. The use of macroseismic epicenters determined from felt earthquakes allowed the time range to be extended back to the beginning of the 20th century, spanning 126 years with 1068 events. The magnitudes have been homogenized to Mw and aftershocks removed. From this initial catalogue, a catalogue of 359 events from 1956 to 2015 with M > 4.4 has been extracted for the seismic hazard assessment. The seismotectonic zonation includes 7 seismic source areas defined on the basis of the regional geological structure, neotectonic fault systems, basin architecture and the distribution of thermal springs and earthquake epicenters. The Gutenberg-Richter seismic hazard parameters were determined using both a least-squares linear fit and the maximum likelihood method (Kijko & Smit AUE program). Seismic hazard maps have been computed with the CRISIS 2012 software using 3 different attenuation laws. We obtained higher PGA values (475-year return period) for the Kivu rift region than the previous estimates (Delvaux et al., 2016). They vary laterally as a function of the tectonic
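
    For the maximum-likelihood step mentioned above, the standard Aki/Utsu estimator of the Gutenberg-Richter b-value is a reasonable stand-in (the study itself uses the Kijko & Smit code, which additionally handles incomplete catalogues); the synthetic magnitudes below are assumptions, not the Kivu catalogue.

```python
# Aki (1965) maximum-likelihood b-value with Utsu's binning correction,
# applied to a synthetic catalogue of Mw >= 4.4 events (placeholder data).
import numpy as np

rng  = np.random.default_rng(1)
m_c  = 4.4                                               # completeness magnitude
d_m  = 0.1                                               # magnitude bin width
mags = m_c + rng.exponential(scale=1 / 2.3, size=359)    # synthetic magnitudes

b = np.log10(np.e) / (mags.mean() - (m_c - d_m / 2.0))   # ML b-value estimate
a = np.log10(len(mags)) + b * m_c                        # companion a-value (not annualised)
print(f"b = {b:.2f}, a = {a:.2f}")
```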

  14. A spatial hazard model for cluster detection on continuous indicators of disease: application to somatic cell score.

    Science.gov (United States)

    Gay, Emilie; Senoussi, Rachid; Barnouin, Jacques

    2007-01-01

    Methods for spatial cluster detection dealing with diseases quantified by continuous variables are few, whereas several diseases are better approached by continuous indicators. For example, subclinical mastitis of the dairy cow is evaluated using a continuous marker of udder inflammation, the somatic cell score (SCS). Consequently, this study proposed to analyze spatialized risk and cluster components of herd SCS through a new method based on a spatial hazard model. The dataset included annual SCS for 34 142 French dairy herds for the year 2000, and important SCS risk factors: mean parity, percentage of winter and spring calvings, and herd size. The model allowed the simultaneous estimation of the effects of known risk factors and of potential spatial clusters on SCS, and the mapping of the estimated clusters and their range. Mean parity and winter and spring calvings were significantly associated with subclinical mastitis risk. The model with the presence of 3 clusters was highly significant, and the 3 clusters were attractive, i.e. closeness to cluster center increased the occurrence of high SCS. The three localizations were the following: close to the city of Troyes in the northeast of France; around the city of Limoges in the center-west; and in the southwest close to the city of Tarbes. The semi-parametric method based on spatial hazard modeling applies to continuous variables, and takes account of both risk factors and potential heterogeneity of the background population. This tool allows a quantitative detection but assumes a spatially specified form for clusters.

  15. The power ratio and the interval map spiking models and extracellular data

    CERN Document Server

    Reich, Daniel S.; Victor, Jonathan D.; Knight, Bruce W.

    1998-01-01

    We describe a new, computationally simple method for analyzing the dynamics of neuronal spike trains driven by external stimuli. The goal of our method is to test the predictions of simple spike-generating models against extracellularly recorded neuronal responses. Through a new statistic called the power ratio, we distinguish between two broad classes of responses: (1) responses that can be completely characterized by a variable firing rate (for example, modulated Poisson and gamma spike trains); and (2) responses for which firing rate variations alone are not sufficient to characterize response dynamics (for example, leaky integrate-and-fire spike trains as well as Poisson spike trains with long absolute refractory periods). We show that the responses of many visual neurons in the cat retinal ganglion, cat lateral geniculate nucleus, and macaque primary visual cortex fall into the second class, which implies that the pattern of spike times can carry significant information about visual stimuli. Our results...

  16. Mesomechanical model and analysis of an artificial muscle functioning: role of Poisson’s ratio

    Science.gov (United States)

    Shil'ko, Serge; Chernous, Dmitry; Basinyuk, Vladimir

    2016-05-01

    The mechanism of force generation in a polymer monofilament actuator element with auxetic characteristics is modeled to assess the development and the optimization of a controlled drive based on the use of electrostrictive polymers. The monofilament is considered as a viscoelastic rod. By assuming a ‘sliding thread’ deformation occurring within the system, the variation of the monofilament length during the uniform contraction and force generated during a uniaxial mode of actuation have been obtained. The distribution of the axial stress was determined along the length of the monofilament at various stages during the uniform contraction. The rate of contraction reaches a maximum, together with a minimum of the stress intensity when the equivalent Poisson’s ratio of the actuator is negative.

  17. Rapid SAR and GPS Measurements and Models for Hazard Science and Situational Awareness

    Science.gov (United States)

    Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Moore, A. W.; Rosen, P. A.; Simons, M.; Webb, F.; Linick, J.; Fielding, E. J.; Lundgren, P.; Sacco, G. F.; Polet, J.; Manipon, G.

    2016-12-01

    The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating higher level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR), Differential Global Positioning System (DGPS), SAR-based change detection, and image pixel tracking have recently become critical additions to our toolset for understanding and mapping the damage caused by earthquakes, volcanic eruptions, landslides, and floods. Analyses of these data sets are still largely handcrafted following each event and are not generated rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by California Institute of Technology (Caltech) and by NASA through the Jet Propulsion Laboratory (JPL), has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition, the ARIA project is developing the capabilities to provide automated imaging and analysis capabilities necessary to keep up with the imminent increase in raw data from geodetic imaging missions planned for launch by NASA, as well as international space agencies. We will present the progress we have made on automating the analysis of SAR data for hazard monitoring and response using data from Sentinel 1a/b as well as continuous GPS stations. Since the beginning of our project, our team has imaged events and generated response products for events around the world. These response products have enabled many conversations with those in the disaster response community

  18. On the development of a seismic source zonation model for seismic hazard assessment in western Saudi Arabia

    Science.gov (United States)

    Zahran, Hani M.; Sokolov, Vladimir; Roobol, M. John; Stewart, Ian C. F.; El-Hadidy Youssef, Salah; El-Hadidy, Mahmoud

    2016-07-01

    A new seismic source model has been developed for the western part of the Arabian Peninsula, which has experienced considerable earthquake activity in the historical past and in recent times. The data used for the model include an up-to-date seismic catalog, results of recent studies of Cenozoic faulting in the area, aeromagnetic anomaly and gravity maps, geological maps, and miscellaneous information on volcanic activity. The model includes 18 zones ranging along the Red Sea and the Arabian Peninsula from the Gulf of Aqaba and the Dead Sea in the north to the Gulf of Aden in the south. The seismic source model developed in this study may be considered as one of the basic branches in a logic tree approach for seismic hazard assessment in Saudi Arabia and adjacent territories.

  19. Planning ahead for asteroid and comet hazard mitigation, phase 1: parameter space exploration and scenario modeling

    Energy Technology Data Exchange (ETDEWEB)

    Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory

    2009-01-01

    The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.

  20. Modeling hydrologic and geomorphic hazards across post-fire landscapes using a self-organizing map approach

    Science.gov (United States)

    Friedel, Michael J.

    2011-01-01

    Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organizing map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criterion following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
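
    The clustering step described above (k-means over the SOM codebook with the Davies-Bouldin index used to choose the number of clusters) can be sketched with scikit-learn; the SOM training itself is omitted here and the codebook is simulated.

```python
# K-means over (simulated) SOM neuron codebook vectors, selecting k by the
# Davies-Bouldin criterion. The 12x12 map and 12 variables are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(2)
codebook = rng.normal(size=(144, 12))     # e.g. a 12x12 SOM, 12 landscape variables

scores = {}
for k in range(2, 13):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(codebook)
    scores[k] = davies_bouldin_score(codebook, labels)

best_k = min(scores, key=scores.get)      # lower Davies-Bouldin index is better
print(f"best k by Davies-Bouldin: {best_k}")
```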

  1. A Kinematic Fault Network Model of Crustal Deformation for California and Its Application to the Seismic Hazard Analysis

    Science.gov (United States)

    Zeng, Y.; Shen, Z.; Harmsen, S.; Petersen, M. D.

    2010-12-01

    We invert GPS observations to determine the slip rates on major faults in California based on a kinematic fault model of crustal deformation with geological slip rate constraints. Assuming an elastic half-space, we interpret secular surface deformation using a kinematic fault network model with each fault segment slipping beneath a locking depth. This model simulates both block-like deformation and elastic strain accumulation within each bounding block. Each fault segment is linked to its adjacent elements with slip continuity imposed at fault nodes or intersections. The GPS observations across California and its neighbors are obtained from the SCEC WGCEP project of California Crustal Motion Map version 1.0 and SCEC Crustal Motion Map 4.0. Our fault models are based on the SCEC UCERF 2.0 fault database, a previous southern California block model by Shen and Jackson, and the San Francisco Bay area block model by d’Alessio et al. Our inversion shows a slip rate ranging from 20 to 26 mm/yr for the northern San Andreas from the Santa Cruz Mountain to the Peninsula segment. Slip rates vary from 8 to 14 mm/yr along the Hayward to the Maacama segment, and from 17 to 6 mm/yr along the central Calaveras to West Napa. For the central California creeping section, we find a depth-dependent slip rate with an average slip rate of 23 mm/yr across the upper 5 km and 30 mm/yr underneath. Slip rates range from 30 mm/yr along the Parkfield and central California creeping section of the San Andreas to an average of 6 mm/yr on the San Bernardino Mountain segment. On the southern San Andreas, slip rates vary from 21 to 30 mm/yr from the Coachella Valley to the Imperial Valley, and from 7 to 16 mm/yr along the San Jacinto segments. The shortening rate across the greater Los Angeles region is consistent with the regional tectonics and crustal thickening in the area. We are now in the process of applying the result to seismic hazard evaluation. Overall the geodetic and geological derived
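
    The inversion step can be illustrated as a generic weighted linear least-squares problem d = G s, with geological slip rates entering as weighted pseudo-observations; the Green's-function matrix below is a random placeholder, not an elastic half-space dislocation model.

```python
# Generic least-squares slip-rate inversion sketch: GPS velocities d related to
# fault slip rates s through a (here random) design matrix G, with geological
# rate constraints appended as extra weighted rows. All values are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_faults = 200, 12
G      = rng.normal(size=(n_obs, n_faults))       # placeholder Green's functions
s_true = rng.uniform(5, 30, n_faults)             # "true" slip rates, mm/yr
d      = G @ s_true + rng.normal(0, 1.0, n_obs)   # synthetic GPS velocity data

w = 0.5                                           # weight given to geological rates
G_aug = np.vstack([G, w * np.eye(n_faults)])
d_aug = np.concatenate([d, w * s_true])

s_hat, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
print(np.round(s_hat, 1))                         # recovered slip rates, mm/yr
```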

  2. Hidden Markov Model for quantitative prediction of snowfall and analysis of hazardous snowfall events over Indian Himalaya

    Science.gov (United States)

    Joshi, J. C.; Tankeshwar, K.; Srivastava, Sunita

    2017-04-01

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992-2012). There are six observations and six states in the model. The most probable observation and state sequences have been computed using the Forward and Viterbi algorithms, respectively. The Baum-Welch algorithm has been used for optimizing the model parameters. The model has been validated for two winters (2012-2013 and 2013-2014) by computing the root mean square error (RMSE) and accuracy measures such as percent correct (PC), critical success index (CSI) and Heidke skill score (HSS). The RMSE of the model has also been calculated using the leave-one-out cross-validation method. Snowfall predicted by the model during hazardous snowfall events in different parts of the Himalaya matches well with observations. The HSS of the model for all the stations implies that the optimized model has better forecasting skill than a random forecast for both days. The RMSE of the optimized model has also been found to be smaller than the persistence forecast and the standard deviation for both days.
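
    The "most probable state sequence" step mentioned above is the Viterbi algorithm; a minimal log-space implementation is sketched below with arbitrary 6-state, 6-symbol matrices standing in for the snowfall model's parameters.

```python
# Minimal log-space Viterbi decoder for a discrete HMM (placeholder matrices,
# not the optimized snowfall model parameters).
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observation indices; pi: initial probs; A: transitions; B: emissions."""
    n_states, T = A.shape[0], len(obs)
    logd = np.full((T, n_states), -np.inf)       # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int)    # backpointers
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logd[t - 1] + np.log(A[:, j])
            back[t, j] = np.argmax(scores)
            logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(4)
A  = rng.dirichlet(np.ones(6), size=6)   # 6 hidden states, as in the paper
B  = rng.dirichlet(np.ones(6), size=6)   # 6 observation symbols
pi = np.full(6, 1 / 6)
print(viterbi([0, 3, 5, 2, 2, 1], pi, A, B))
```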

  3. Hidden Markov Model for quantitative prediction of snowfall and analysis of hazardous snowfall events over Indian Himalaya

    Indian Academy of Sciences (India)

    J C Joshi; K Tankeshwar; Sunita Srivastava

    2017-04-01

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six observations and six states in the model. The most probable observation and state sequences have been computed using the Forward and Viterbi algorithms, respectively. The Baum–Welch algorithm has been used for optimizing the model parameters. The model has been validated for two winters (2012–2013 and 2013–2014) by computing the root mean square error (RMSE) and accuracy measures such as percent correct (PC), critical success index (CSI) and Heidke skill score (HSS). The RMSE of the model has also been calculated using the leave-one-out cross-validation method. Snowfall predicted by the model during hazardous snowfall events in different parts of the Himalaya matches well with observations. The HSS of the model for all the stations implies that the optimized model has better forecasting skill than a random forecast for both days. The RMSE of the optimized model has also been found to be smaller than the persistence forecast and the standard deviation for both days.

  4. Hopf and steady state bifurcation analysis in a ratio-dependent predator-prey model

    Science.gov (United States)

    Zhang, Lai; Liu, Jia; Banerjee, Malay

    2017-03-01

    In this paper, we perform spatiotemporal bifurcation analysis in a ratio-dependent predator-prey model and derive explicit conditions for the existence of non-constant steady states that emerge through steady state bifurcation from related constant steady states. These explicit conditions are numerically verified in details and further compared to those conditions ensuring Turing instability. We find that (1) Turing domain is identical to the parametric domain where there exists only steady state bifurcation, which implies that Turing patterns are stable non-constant steady states, but the opposite is not necessarily true; (2) In non-Turing domain, steady state bifurcation and Hopf bifurcation act in concert to determine the emergent spatial patterns, that is, non-constant steady state emerges through steady state bifurcation but it may be unstable if the destabilising effect of Hopf bifurcation counteracts the stabilising effect of diffusion, leading to non-stationary spatial patterns; (3) Coupling diffusion into an ODE model can significantly enrich population dynamics by inducing alternative non-constant steady states (four different states are observed, two stable and two unstable), in particular when diffusion interacts with different types of bifurcation; (4) Diffusion can promote species coexistence by saving species which otherwise goes to extinction in the absence of diffusion.

  5. Adaptation of fugacity models to treat speciating chemicals with constant species concentration ratios.

    Science.gov (United States)

    Toose, Liisa K; Mackay, Donald

    2004-09-01

    A "multiplier" method is developed by which multimedia mass balance fugacity models designed to describe the fate of a single chemical species can be applied to chemicals that exist as several interconverting species. The method is applicable only when observed ratios of species concentrations in each phase are relatively constant and there is thus no need to define interspecies conversion rates. It involves the compilation of conventional transformation and intermedia transport rate expressions for a single, selected key species, and then a multiplier, Ri, is deduced for each of the other species. The total rate applicable to all species is calculated as the product of the rate for the single key species and a combined multiplier (1 + R2 + R3 + etc.). The theory is developed and illustrated by two examples. Limitations of the method are discussed, especially under conditions when conversion rates are uncertain. The advantage of this approach is that existing fugacity and concentration-based models that describe the fate of single-species chemicals can be readily adapted to estimate the fate of multispecies substances such as mercury which display relatively constant species proportions in each medium.

  6. Improved analytic extreme-mass-ratio inspiral model for scoping out eLISA data analysis

    CERN Document Server

    Chua, Alvin J K

    2015-01-01

    The space-based gravitational-wave detector eLISA has been selected as the ESA L3 mission, and the mission design will be finalised by the end of this decade. To prepare for mission formulation over the next few years, several outstanding and urgent questions in data analysis will be addressed using mock data challenges, informed by instrument measurements from the LISA Pathfinder satellite launching at the end of 2015. These data challenges will require accurate and computationally affordable waveform models for anticipated sources such as the extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes. Previous data challenges have made use of the well-known analytic EMRI waveforms of Barack and Cutler, which are extremely quick to generate but dephase relative to more accurate waveforms within hours, due to their mismatched radial, polar and azimuthal frequencies. In this paper, we describe an augmented Barack-Cutler model that uses a frequency map to the correct Kerr freq...

  7. Modeling high signal-to-noise ratio in a novel silicon MEMS microphone with comb readout

    Science.gov (United States)

    Manz, Johannes; Dehe, Alfons; Schrag, Gabriele

    2017-05-01

    Strong competition within the consumer market urges companies to constantly improve the quality of their devices. For silicon microphones, excellent sound quality is the key feature in this respect, which means that improving the signal-to-noise ratio (SNR), which is strongly correlated with sound quality, is a major task in meeting the growing demands of the market. MEMS microphones with conventional capacitive readout suffer from noise caused by viscous damping losses arising from perforations in the backplate [1]. Therefore, we conceived a novel microphone design based on capacitive read-out via comb structures, which is expected to show a reduction in fluidic damping compared to conventional MEMS microphones. In order to evaluate the potential of the proposed design, we developed a fully energy-coupled, modular system-level model taking into account the mechanical motion, the slide-film damping between the comb fingers, the acoustic impact of the package and the capacitive read-out. All submodels are physically based and scale with all relevant design parameters. We carried out noise analyses and, owing to the modular and physics-based character of the model, were able to discriminate the noise contributions of different parts of the microphone. This enables us to identify design variants of this concept which exhibit an SNR of up to 73 dB(A), which is superior to conventional MEMS microphones and at least comparable to high-performance variants of the current state-of-the-art MEMS microphones [2].

  8. Male sexual strategies modify ratings of female models with specific waist-to-hip ratios.

    Science.gov (United States)

    Brase, Gary L; Walker, Gary

    2004-06-01

    Female waist-to-hip ratio (WHR) has generally been an important general predictor of ratings of physical attractiveness and related characteristics. Individual differences in ratings do exist, however, and may be related to differences in the reproductive tactics of the male raters such as pursuit of short-term or long-term relationships and adjustments based on perceptions of one's own quality as a mate. Forty males, categorized according to sociosexual orientation and physical qualities (WHR, Body Mass Index, and self-rated desirability), rated female models on both attractiveness and likelihood they would approach them. Sociosexually restricted males were less likely to approach females rated as most attractive (with 0.68-0.72 WHR), as compared with unrestricted males. Males with lower scores in terms of physical qualities gave ratings indicating more favorable evaluations of female models with lower WHR. The results indicate that attractiveness and willingness to approach are overlapping but distinguishable constructs, both of which are influenced by variations in characteristics of the raters.

  9. Fatigue Modeling for Superelastic NiTi Considering Cyclic Deformation and Load Ratio Effects

    Science.gov (United States)

    Mahtabi, Mohammad J.; Shamsaei, Nima

    2017-08-01

    A cumulative energy-based damage model, called total fatigue toughness, is proposed for fatigue life prediction of superelastic NiTi alloys with various deformation responses (i.e., transformation stresses), which also accounts for the effects of mean strain and stress. Mechanical response of superelastic NiTi is highly sensitive to chemical composition, material processing, as well as operating temperature; therefore, significantly different deformation responses may be obtained for seemingly identical NiTi specimens. In this paper, a fatigue damage parameter is proposed that can be used for fatigue life prediction of superelastic NiTi alloys with different mechanical properties such as loading and unloading transformation stresses, modulus of elasticity, and austenite-to-martensite start and finish strains. Moreover, the model is capable of capturing the effects of tensile mean strain and stress on the fatigue behavior. Fatigue life predictions using the proposed damage parameter for specimens with different cyclic stress responses, tested at various strain ratios (R_ε = ε_min/ε_max), are shown to be in very good agreement with the experimentally observed fatigue lives.

  10. Numerical Simulation of Optically-Induced Dielectrophoresis Using a Voltage-Transformation-Ratio Model

    Directory of Open Access Journals (Sweden)

    Sheng-Chieh Huang

    2013-02-01

    Optically-induced dielectrophoresis (ODEP) has been extensively used for the manipulation and separation of cells, beads and micro-droplets in microfluidic devices. With this approach, non-uniform electric fields induced by light projected on a photoconductive layer can be used to generate attractive or repulsive forces on dielectric materials. Moving these light patterns can then be used for the manipulation of particles in the microfluidic devices. This study reports on the results from numerical simulation of the ODEP platform using a new model based on a voltage transformation ratio, which takes the effective electrical voltage into consideration. Results showed that the numerical simulation was in reasonable agreement with experimental data for the manipulation of polystyrene beads and emulsion droplets, with a coefficient of variation of less than 6.2% (n = 3). The proposed model can be applied to simulations of the ODEP force and may provide a reliable tool for estimating induced dielectrophoretic forces and electric fields, which is crucial for microfluidic applications.

  11. Estimating the ratios of the stationary distribution values for Markov chains modeling evolutionary algorithms.

    Science.gov (United States)

    Mitavskiy, Boris; Cannings, Chris

    2009-01-01

    The stochastic process underlying an evolutionary algorithm is well known to be Markovian. Such processes have been under investigation in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling an evolutionary algorithm is irreducible and, therefore, has a unique stationary distribution. Rather little is known about this stationary distribution. In fact, the only quantitative facts established so far tell us that the stationary distributions of Markov chains modeling evolutionary algorithms concentrate on uniform populations (i.e., those populations consisting of repeated copies of the same individual). At the same time, knowing the stationary distribution may provide information about the expected time it takes for the algorithm to reach a certain solution, allow assessment of the biases due to recombination and selection, and is of importance in population genetics for assessing what is called the "genetic load" (see the introduction for more details). In recent joint works of the first author, some bounds have been established on the rates at which the stationary distribution concentrates on the uniform populations. The primary tool used in these papers is the "quotient construction" method. It turns out that the quotient construction method can be exploited to derive much more informative bounds on the ratios of the stationary distribution values of various subsets of the state space. In fact, some of the bounds obtained in the current work are expressed in terms of the parameters involved in all three main stages of an evolutionary algorithm: namely, selection, recombination, and mutation.
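
    As a concrete (if much smaller) illustration of the quantity being bounded, the stationary distribution of a finite Markov chain and the ratio of its mass on two subsets of states can be computed directly; the random transition matrix below is a placeholder, not a chain derived from an actual evolutionary algorithm.

```python
# Stationary distribution of a small Markov chain and the ratio of its mass on
# two subsets of the state space (all matrices and subsets are invented).
import numpy as np

rng = np.random.default_rng(5)
P = rng.dirichlet(np.ones(8), size=8)            # row-stochastic transition matrix

evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()                               # stationary distribution (pi P = pi)

uniform_like = [0, 1]                            # e.g. states for "uniform populations"
others       = [2, 3, 4, 5, 6, 7]
ratio = pi[uniform_like].sum() / pi[others].sum()
print(f"stationary mass ratio = {ratio:.3f}")
```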

  12. Estimation of Radius Ratio in a Fin Using Inverse CFD Model

    Directory of Open Access Journals (Sweden)

    Ranjan Das

    2011-03-01

    This article deals with the retrieval of parameters such as the radius-ratio in a rectangular fin using an inverse CFD model involving a mixed boundary condition. First, the temperature field is obtained from a forward problem using the finite difference method (FDM), in which the inner and outer radii or the radius-ratio is assumed to be known. Next, by an inverse approach using the FDM in conjunction with a genetic algorithm (GA), the inner and outer radii or the radius-ratio is retrieved. To accomplish this task, an objective function, represented by the sum of squares of the error between the guessed and the exact/measured temperature fields, is minimized. Apart from demonstrating the suitability of the FDM
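
    The inverse step can be sketched with a population-based global optimiser minimising the sum of squared temperature errors; SciPy's differential evolution is used here as a stand-in for the GA, and the forward model is a placeholder, not the paper's finite-difference fin solver.

```python
# Hedged sketch of the inverse retrieval: minimise sum of squared differences
# between simulated and "measured" temperatures over the radius ratio.
import numpy as np
from scipy.optimize import differential_evolution

def forward_model(radius_ratio, x):
    # placeholder for the FDM temperature solution of the fin
    return np.exp(-radius_ratio * x)

x_meas = np.linspace(0.0, 1.0, 20)
t_meas = forward_model(0.35, x_meas)                # synthetic "measured" temperature field

def objective(params):
    return np.sum((forward_model(params[0], x_meas) - t_meas) ** 2)

result = differential_evolution(objective, bounds=[(0.05, 0.95)], seed=0)
print(f"retrieved radius ratio = {result.x[0]:.3f}")
```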

  13. Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach

    Science.gov (United States)

    Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur

    2010-01-01

    A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve
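
    A rainfall intensity-duration triggering test of the kind used to combine susceptibility with rainfall forcing can be sketched as follows; the threshold coefficients and storms are illustrative assumptions, not the values calibrated for Central America.

```python
# Flag a storm as a potential landslide trigger when its mean intensity exceeds
# an intensity-duration threshold I = a * D**(-b) (coefficients are assumptions).
def exceeds_id_threshold(intensity_mm_h, duration_h, a=15.0, b=0.6):
    return intensity_mm_h >= a * duration_h ** (-b)

for rain_mm, dur_h in [(40, 6), (90, 24), (25, 2)]:
    intensity = rain_mm / dur_h
    flag = exceeds_id_threshold(intensity, dur_h)
    print(f"{dur_h:>3} h storm, {intensity:5.1f} mm/h -> trigger: {flag}")
```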

  15. Climate change impact assessment on Veneto and Friuli Plain groundwater. Part I: an integrated modeling approach for hazard scenario construction.

    Science.gov (United States)

    Baruffi, F; Cisotto, A; Cimolino, A; Ferri, M; Monego, M; Norbiato, D; Cappelletto, M; Bisaglia, M; Pretner, A; Galli, A; Scarinci, A; Marsala, V; Panelli, C; Gualdi, S; Bucchignani, E; Torresan, S; Pasini, S; Critto, A; Marcomini, A

    2012-12-01

    Climate change impacts on water resources, particularly groundwater, are a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers because of their relevant link with European water policy directives (e.g. 2000/60/EC and 2006/118/EC) and related environmental objectives. Understanding the long-term impacts of climate variability and change is therefore a key challenge in order to design effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of the high Veneto and Friuli Plain, Northern Italy. Given the aim of evaluating potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations in nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections were made by prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling then provided the precipitation, temperature and evapotranspiration fields used for the impact analysis. Based on the downscaled climate projections, 3 reference scenarios for the period 2071-2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble produced

  16. Filling high aspect ratio trenches by superconformal chemical vapor deposition: Predictive modeling and experiment

    Science.gov (United States)

    Wang, Wenjiao B.; Abelson, John R.

    2014-11-01

    Complete filling of a deep recessed structure with a second material is a challenge in many areas of nanotechnology fabrication. A newly discovered superconformal coating method, applicable in chemical vapor deposition systems that utilize a precursor in combination with a co-reactant, can solve this problem. However, filling is a dynamic process in which the trench progressively narrows and the aspect ratio (AR) increases. This reduces species diffusion within the trench and may drive the component partial pressures out of the regime for superconformal coating. We therefore derive two theoretical models that can predict the possibility of filling. First, we recast the diffusion-reaction equation for the case of a sidewall with variable taper angle. This affords a definition of effective AR, which is larger than the nominal AR due to the reduced species transport. We then derive the coating profile, both for superconformal and for conformal coating. The critical (most difficult) step in the filling process occurs when the sidewalls merge at the bottom of the trench to form the V shape. For the Mg(DMADB)2/H2O system and a starting AR = 9, this model predicts that complete filling will not be possible, whereas experimentally we do obtain complete filling. We then hypothesize that glancing-angle, long-range transport of species may be responsible for the better-than-predicted filling. To account for the variable range of species transport, we construct a ballistic transport model. This incorporates the incident flux from outside the structure, cosine-law re-emission from surfaces, and line-of-sight transport between internal surfaces. We cast the transport probability between all positions within the trench into a matrix that represents the redistribution of flux after one cycle of collisions. Matrix manipulation then affords a computationally efficient means to determine the steady-state flux distribution and growth rate for a given taper angle. The
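
    The linear-algebra core of such a ballistic transport calculation is a steady-state flux balance f = f_ext + R f, solved as f = (I - R)^-1 f_ext; the re-emission matrix below is a made-up example rather than view factors for an actual trench profile.

```python
# Steady-state flux balance for ballistic transport between surface elements:
# f = f_ext + R f, with R a (here invented) surface-to-surface re-emission matrix.
import numpy as np

rng = np.random.default_rng(6)
n = 40                                          # surface elements down the sidewalls
R = rng.uniform(0, 1, size=(n, n))
R *= 0.9 / R.sum(axis=1, keepdims=True)         # rows sum to < 1 (some flux escapes)

f_ext = np.linspace(1.0, 0.1, n)                # external incident flux, decaying into the trench
f = np.linalg.solve(np.eye(n) - R, f_ext)       # steady-state flux distribution
growth = 0.2 * f                                # growth rate ~ local flux (illustrative sticking factor)
print(growth[:5])
```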

  17. CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.

    1993-06-01

    CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.

  18. Ratio of Real to Imaginary for pp and (p)p Elastic Scatterings in QCD Inspired Model

    Institute of Scientific and Technical Information of China (English)

    LU Juan; MA Wei-Xing; HE Xiao-Rong

    2007-01-01

    We use the QCD-inspired model to analyze the ratio of the real to the imaginary part of the amplitude for pp and p̄p elastic scattering. A calculation of this ratio is performed in which the contributions from the gluon-gluon interaction, quark-quark interaction, quark-gluon interaction, and the odd eikonal profile function are included. Our results show that the QCD-inspired model gives a good fit to the LHC experimental data.

  19. The unconvincing product - Consumer versus expert hazard identification: A mental models study of novel foods

    DEFF Research Database (Denmark)

    Hagemann, Kit; Scholderer, Joachim

    in the consumer data as consumers 1) implicitly assumed that the novel foods had substantially improved agronomic properties and 2) assumed that all novel foods were governed by the same body of legislation that applies to GM foods. The last misconception might influence consumer trust in risk management when...... between technical experts and consumers e.g. over the nature of the hazards on which risk assessments should focus and perceptions of insufficient openness about uncertainties in risk assessment. The consumers part of the EU-project, NOFORISK, investigate the disagreement by comparing laypeople...... consumers realize that novel foods other than GM foods do not have to undergo environmental risk assessment. Another implication for risk management appeared as consumers did not demand any participatory elements in the risk analysis process. Consumers talked extensively about normative, governance...

  20. CAirTOX, An inter-media transfer model for assessing indirect exposures to hazardous air contaminants

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.

    1994-01-01

    Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out.
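
    The "arithmetic mean plus coefficient of variation" input convention lends itself to a simple Monte Carlo sketch; the lognormal choice, the parameter names and the placeholder dose chain below are assumptions, not the CAirTOX equations.

```python
# Sample uncertain inputs from their mean and CV (as lognormals) and propagate
# them through a placeholder exposure calculation.
import numpy as np

def lognormal_from_mean_cv(mean, cv, size, rng):
    sigma2 = np.log(1.0 + cv**2)            # lognormal parameters from mean and CV
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

rng = np.random.default_rng(7)
emission   = lognormal_from_mean_cv(2.0,  0.5, 10_000, rng)   # g/s (illustrative)
dispersion = lognormal_from_mean_cv(1e-4, 0.8, 10_000, rng)   # (g/m^3) per (g/s) (illustrative)
intake     = lognormal_from_mean_cv(20.0, 0.3, 10_000, rng)   # m^3/day (illustrative)

dose = emission * dispersion * intake                          # placeholder dose chain
print(np.percentile(dose, [5, 50, 95]))
```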

  1. Soil-to-Plant Concentration Ratios for Assessing Food Chain Pathways in Biosphere Models

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.; Fellows, Robert J.; Krupka, Kenneth M.

    2007-10-01

    This report describes work performed for the U.S. Nuclear Regulatory Commission’s project Assessment of Food Chain Pathway Parameters in Biosphere Models, which was established to assess and evaluate a number of key parameters used in the food-chain models employed in performance assessments of radioactive waste disposal facilities. Section 2 of this report summarizes characteristics of samples of soils and groundwater from three geographical regions of the United States, the Southeast, Northwest, and Southwest, and analyses performed to characterize their physical and chemical properties. Because the uptake and behavior of radionuclides in plant roots, plant leaves, and animal products depend on the chemistry of the water and soil coming in contact with plants and animals, water and soil samples collected from these regions of the United States were used in experiments at Pacific Northwest National Laboratory to determine radionuclide soil-to-plant concentration ratios. Crops and forage used in the experiments were grown in the soils, and long-lived radionuclides introduced into the groundwater provided the contaminated water used to water the growing plants. The radionuclides evaluated include 99Tc, 238Pu, and 241Am. Plant varieties include alfalfa, corn, onion, and potato. The radionuclide uptake results from this research study show how regional variations in water quality and soil chemistry affect radionuclide uptake. Section 3 summarizes the procedures and results of the uptake experiments, and relates the soil-to-plant uptake factors derived. In Section 4, the results found in this study are compared with similar values found in the biosphere modeling literature; the study’s results are generally in line with current literature, but soil- and plant-specific differences are noticeable. These food-chain pathway data may be used by the NRC staff to assess dose to persons in the reference biosphere (e.g., persons who live and work in an area potentially affected
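
    The derived quantity itself is straightforward: a soil-to-plant concentration ratio is the activity concentration in the plant tissue divided by that in the soil, both on a dry-weight basis. The numbers below are invented, not measured values from this study.

```python
# Soil-to-plant concentration ratio CR = C_plant / C_soil (dry-weight basis).
def concentration_ratio(c_plant_bq_per_kg, c_soil_bq_per_kg):
    return c_plant_bq_per_kg / c_soil_bq_per_kg

print(concentration_ratio(c_plant_bq_per_kg=12.0, c_soil_bq_per_kg=480.0))   # -> 0.025
```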