WorldWideScience

Sample records for proportional hazard model

  1. Proportional hazards models of infrastructure system recovery

    International Nuclear Information System (INIS)

    Barker, Kash; Baroud, Hiba

    2014-01-01

    As emphasis is being placed on a system's ability to withstand and to recover from a disruptive event, collectively referred to as dynamic resilience, there exists a need to quantify a system's ability to bounce back after a disruptive event. This work applies a statistical technique from biostatistics, the proportional hazards model, to describe (i) the instantaneous rate of recovery of an infrastructure system and (ii) the likelihood that recovery occurs prior to a given point in time. A major benefit of the proportional hazards model is its ability to describe a recovery event as a function of time as well as covariates describing the infrastructure system or disruptive event, among others, which can also vary with time. The proportional hazards approach is illustrated with a publicly available electric power outage data set.
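
    For orientation, the proportional hazards form that this and most of the records below build on can be written in standard notation (not specific to this paper): the covariate vector x(t) may include time-varying descriptors of the system or the disruption, and the recovery-by-time-t probability follows from the integrated hazard:

        \lambda(t \mid x) = \lambda_0(t)\,\exp\{\beta^{\top} x(t)\},
        \qquad
        \Pr(T \le t \mid x) = 1 - \exp\Bigl(-\int_0^t \lambda(u \mid x)\,du\Bigr).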

  2. Functional form diagnostics for Cox's proportional hazards model.

    Science.gov (United States)

    León, Larry F; Tsai, Chih-Ling

    2004-03-01

    We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.

  3. Optimization of maintenance policy using the proportional hazard model

    Energy Technology Data Exchange (ETDEWEB)

    Samrout, M. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: mohamad.el_samrout@utt.fr; Chatelet, E. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: chatelt@utt.fr; Kouta, R. [M3M Laboratory, University of Technology of Belfort Montbeliard (France); Chebbo, N. [Industrial Systems Laboratory, IUT, Lebanese University (Lebanon)

    2009-01-15

    The evolution of system reliability depends on its structure as well as on the evolution of its components' reliability. The latter is a function of component age during a system's operating life. Component aging is strongly affected by maintenance activities performed on the system. In this work, we consider two categories of maintenance activities: corrective maintenance (CM) and preventive maintenance (PM). Maintenance actions are characterized by their ability to reduce this age. PM consists of actions applied on components while they are operating, whereas CM actions occur when the component breaks down. In this paper, we expound a new method to integrate the effect of CM while planning for the PM policy. The proportional hazard function was used as a modeling tool for that purpose. Interesting results were obtained when comparing policies that take the CM effect into consideration with those that do not.

  4. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    Science.gov (United States)

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.
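
    In the two-stage potential-outcomes framework the abstract invokes, the decomposition on the survival-probability scale can be written in generic notation (standard definitions, not copied from the paper), where S_{a,M(a')}(t) is the survival probability at time t with exposure set to a and the mediator set to its value under exposure a':

        \mathrm{TE}(t) = S_{1,M(1)}(t) - S_{0,M(0)}(t)
        = \underbrace{S_{1,M(1)}(t) - S_{1,M(0)}(t)}_{\mathrm{NIE}(t)}
        + \underbrace{S_{1,M(0)}(t) - S_{0,M(0)}(t)}_{\mathrm{NDE}(t)}.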

  5. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
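
    A minimal sketch of the kind of model-based assessment the abstract discusses, assuming the Python lifelines library; the file and column names here are hypothetical:

        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.statistics import proportional_hazard_test

        df = pd.read_csv("events.csv")            # hypothetical data: T, E, covariates
        cph = CoxPHFitter()
        cph.fit(df, duration_col="T", event_col="E")

        # Score test based on scaled Schoenfeld residuals against transformed time;
        # small p-values flag covariates whose effect appears to change over time.
        result = proportional_hazard_test(cph, df, time_transform="rank")
        print(result.summary)

    As the abstract cautions, a non-significant result from such a test may simply reflect low power rather than a well-satisfied assumption.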

  6. Measures to assess the prognostic ability of the stratified Cox proportional hazards model

    DEFF Research Database (Denmark)

    The Fibrinogen Studies Collaboration; Tybjærg-Hansen, Anne (The Copenhagen City Heart Study)

    2009-01-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...

  7. Cox proportional hazards models have more statistical power than logistic regression models in cross-sectional genetic association studies

    NARCIS (Netherlands)

    van der Net, Jeroen B.; Janssens, A. Cecile J. W.; Eijkemans, Marinus J. C.; Kastelein, John J. P.; Sijbrands, Eric J. G.; Steyerberg, Ewout W.

    2008-01-01

    Cross-sectional genetic association studies can be analyzed using Cox proportional hazards models with age as time scale, if age at onset of disease is known for the cases and age at data collection is known for the controls. We assessed to what degree and under what conditions Cox proportional

  8. ADVANCES IN RENEWAL DECISION-MAKING UTILISING THE PROPORTIONAL HAZARDS MODEL WITH VIBRATION COVARIATES

    Directory of Open Access Journals (Sweden)

    Pieter-Jan Vlok

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Increased competitiveness in the production world necessitates improved maintenance strategies to increase availability and drive down cost. The maintenance engineer is thus faced with the need to make more intelligent preventive renewal decisions. Two of the main techniques to achieve this are Condition Monitoring (such as vibration monitoring and oil analysis) and Statistical Failure Analysis (typically using probabilistic techniques). The present paper discusses these techniques, their uses and weaknesses, and then presents the Proportional Hazards Model as a solution to most of these weaknesses. It then goes on to compare the results of the different techniques in monetary terms, using a South African case study. This comparison shows clearly that the Proportional Hazards Model is superior to the present techniques and should be the preferred model for many actual maintenance situations.

    AFRIKAANS ABSTRACT (translated): Increased levels of competition in the production environment necessitate improved maintenance strategies to increase equipment availability and minimize cost. Maintenance engineers must consequently make more intelligent preventive renewal decisions. Two prominent techniques for achieving this goal are Condition Monitoring (such as vibration monitoring or oil analysis) and Statistical Failure Analysis (usually by means of probabilistic methods). In this article we consider both of these techniques, their uses and shortcomings, and then propose the Proportional Hazards Model as a solution to most of the shortcomings. The article also compares the different techniques in monetary terms using a South African case study. This comparison shows clearly that the Proportional Hazards Model holds greater promise than the current techniques and that it should be the preferred solution in many real maintenance situations.

  9. Proportional hazards model with varying coefficients for length-biased data.

    Science.gov (United States)

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study the nonlinear interaction effect of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established by using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.

  10. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as the real diffuse large B-cell lymphoma (DLBCL), the lung cancer, and the AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  11. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.

    Science.gov (United States)

    Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N

    2016-01-01

    Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
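
    The SAS procedures named above have rough open-source analogues; a hedged Python sketch using lifelines (PHREG roughly corresponds to CoxPHFitter, LIFEREG with a Weibull distribution to WeibullAFTFitter; file and column names hypothetical):

        import pandas as pd
        from lifelines import CoxPHFitter, WeibullAFTFitter

        df = pd.read_csv("trial.csv")     # hypothetical: T, E, treatment, mediator

        # Semi-parametric PH model: leaves the baseline hazard unspecified.
        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")

        # Fully parametric Weibull AFT model: coefficients act on log survival
        # time, which is what lets them slot into a linear mediation model.
        aft = WeibullAFTFitter().fit(df, duration_col="T", event_col="E")

        print(cph.params_)   # log hazard ratios
        print(aft.params_)   # log time ratios (plus the Weibull shape)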

  12. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    Science.gov (United States)

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  13. Mediation Analysis with Survival Outcomes: Accelerated Failure Time Versus Proportional Hazards Models

    Directory of Open Access Journals (Sweden)

    Lois A Gelfand

    2016-03-01

    Full Text Available Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.

  14. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    Science.gov (United States)

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular model for analyzing the effects of several covariates on survival time. However, the assumption of proportional hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. On the other hand, the accelerated failure time (AFT) models do not assume proportional hazards in the survival data, and can be used as alternatives to the PH model when that assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best fitted model (the log-normal AFT model) showed that covariates such as women's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
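
    A sketch of the AIC-based comparison the paper performs, assuming the Python lifelines library (file and column names hypothetical). lifelines has no dedicated exponential AFT regression fitter, so the exponential case is only noted here as the Weibull model with its shape fixed at 1:

        import pandas as pd
        from lifelines import WeibullAFTFitter, LogNormalAFTFitter

        df = pd.read_csv("fbi.csv")   # hypothetical first-birth-interval data: T, E, covariates

        fitters = {"weibull": WeibullAFTFitter(), "log-normal": LogNormalAFTFitter()}
        for name, f in fitters.items():
            f.fit(df, duration_col="T", event_col="E")
            print(name, f.AIC_)       # smaller AIC -> preferred model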

  15. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  16. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
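
    The semiparametric PH estimators the paper proposes are not part of standard libraries, but lifelines does offer interval-censored fitting for its parametric regression models, which can serve as a stand-in; a sketch under that assumption (file and column names hypothetical):

        import pandas as pd
        from lifelines import WeibullAFTFitter

        # Hypothetical interval-censored data: each event is known only to lie
        # in (lb, ub]; ub = +inf encodes a right-censored observation.
        df = pd.read_csv("intervals.csv")
        aft = WeibullAFTFitter()
        aft.fit_interval_censoring(df, lower_bound_col="lb", upper_bound_col="ub")
        aft.print_summary()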

  17. A study of the slope of Cox proportional hazard and Weibull models

    African Journals Online (AJOL)

    Adejumo & Ahmadu

    known and the hazard function is completely specified except for the values of the ... through the air when people who have an active TB infection cough or sneeze ... The increase of TB incidence is highest in Africa and Asia, areas with the highest ... further complicating treatment by increasing the length and cost of therapy.

  18. Assessing End-Of-Supply Risk of Spare Parts Using the Proportional Hazard Model

    NARCIS (Netherlands)

    X. Li (Xishu); R. Dekker (Rommert); C. Heij (Christiaan); M. Hekimoğlu (Mustafa)

    2016-01-01

    Operators of long field-life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on the operating costs of firms needing these parts. Existing end-of-supply evaluation

  19. Estimation in the positive stable shared frailty Cox proportional hazards model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Pipper, Christian Bressen

    2005-01-01

    model in situations where the correlated survival data show a decreasing association with time. In this paper, we devise a likelihood-based estimation procedure for the positive stable shared frailty Cox model, which is expected to obtain high efficiency. The proposed estimator is provided with large...

  20. Determining the effects of patient casemix on length of hospital stay: a proportional hazards frailty model approach.

    Science.gov (United States)

    Lee, A H; Yau, K K

    2001-01-01

    To identify factors associated with hospital length of stay (LOS) and to model variations in LOS within Diagnosis Related Groups (DRGs). A proportional hazards frailty modelling approach is proposed that accounts for patient transfers and the inherent correlation of patients clustered within hospitals. The investigation is based on patient discharge data extracted for a group of obstetrical DRGs. Application of the frailty approach has highlighted several significant factors after adjustment for patient casemix and random hospital effects. In particular, patients admitted for childbirth with private medical insurance coverage have higher risk of prolonged hospitalization compared to public patients. The determination of pertinent factors provides important information to hospital management and clinicians in assessing the risk of prolonged hospitalization. The analysis also enables the comparison of inter-hospital variations across adjacent DRGs.
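
    A shared-frailty specification of the kind used here can be written in generic notation (a standard form, not necessarily the authors' exact parametrization), with patients j clustered in hospitals i sharing an unobserved multiplicative effect z_i:

        \lambda_{ij}(t \mid x_{ij}, z_i) = z_i\,\lambda_0(t)\exp(\beta^{\top} x_{ij}),
        \qquad z_i \sim \mathrm{Gamma}(1/\theta,\, 1/\theta),

    where the mean-one gamma distribution is a common (though not the only) frailty choice; larger θ corresponds to more between-hospital heterogeneity in length of stay.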

  1. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  2. Evaluation of protocol change in burn-care management using the Cox proportional hazards model with time-dependent covariates.

    Science.gov (United States)

    Ichida, J M; Wassell, J T; Keller, M D; Ayers, L W

    1993-02-01

    Survival analysis methods are valuable for detecting intervention effects because detailed information from patient records and sensitive outcome measures are used. The burn unit at a large university hospital replaced routine bathing with total body bathing using chlorhexidine gluconate for antimicrobial effect. A Cox proportional hazards model was used to analyse time from admission until either infection with Staphylococcus aureus or discharge for 155 patients, controlling for burn severity and two time-dependent covariates: days until first wound excision and days until first administration of prophylactic antibiotics. The risk of infection was 55 per cent higher in the historical control group, although not statistically significant. There was also some indication that early wound excision may be important as an infection-control measure for burn patients.
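
    Time-dependent covariates such as days until first excision are usually handled with a long-format ("start-stop") data layout; a minimal sketch assuming the Python lifelines library (file and column names hypothetical):

        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        # Long format: one row per patient-interval, covariates constant within
        # each (start, stop] interval; e.g. an 'excised' flag flips to 1 after
        # the first wound excision.
        long_df = pd.read_csv("burn_long.csv")
        ctv = CoxTimeVaryingFitter()
        ctv.fit(long_df, id_col="patient_id", event_col="infected",
                start_col="start", stop_col="stop")
        ctv.print_summary()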

  3. Shared and unshared exposure measurement error in occupational cohort studies and their effects on statistical inference in proportional hazards models

    Science.gov (United States)

    Laurier, Dominique; Rage, Estelle

    2018-01-01

    Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862

  4. Statistical analysis of Caterpillar 793D haul truck engine data and through-life diagnostic information using the proportional hazards model

    Directory of Open Access Journals (Sweden)

    Carstens, W. A.

    2013-08-01

    Full Text Available Physical asset management (PAM) is of increasing concern for companies in industry today. A key performance area of PAM is asset care plans (ACPs), which consist of maintenance strategies such as usage based maintenance (UBM) and condition based maintenance (CBM). Data obtained from the South African mining industry was modelled using a CBM prognostic model called the proportional hazards model (PHM). Results indicated that the developed model produced estimates that were reasonable representations of reality. These findings provide an exciting basis for the development of future Weibull PHMs that could result in huge maintenance cost savings and reduced failure occurrences.

  5. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007

  6. Using Swiss Webster mice to model Fetal Alcohol Spectrum Disorders (FASD): An analysis of multilevel time-to-event data through mixed-effects Cox proportional hazards models.

    Science.gov (United States)

    Chi, Peter; Aras, Radha; Martin, Katie; Favero, Carlita

    2016-05-15

    Fetal Alcohol Spectrum Disorders (FASD) collectively describes the constellation of effects resulting from human alcohol consumption during pregnancy. Even with public awareness, the incidence of FASD is estimated to be upwards of 5% in the general population and is becoming a global health problem. The physical, cognitive, and behavioral impairments of FASD are recapitulated in animal models. Recently, rodent models utilizing voluntary drinking paradigms have been developed that accurately reflect moderate consumption, which makes up the majority of FASD cases. The range in severity of FASD characteristics reflects the frequency, dose, developmental timing, and individual susceptibility to alcohol exposure. As most rodent models of FASD use C57BL/6 mice, there is a need to expand the stocks of mice studied in order to more fully understand the complex neurobiology of this disorder. To that end, we allowed pregnant Swiss Webster mice to voluntarily drink ethanol via the drinking in the dark (DID) paradigm throughout their gestation period. Ethanol exposure did not alter gestational outcomes as determined by no significant differences in maternal weight gain, maternal liquid consumption, litter size, or pup weight at birth or weaning. Despite seemingly normal gestation, ethanol-exposed offspring exhibit significantly altered timing to achieve developmental milestones (surface righting, cliff aversion, and open field traversal), as analyzed through mixed-effects Cox proportional hazards models. These results confirm Swiss Webster mice as a viable option to study the incidence and causes of ethanol-induced neurobehavioral alterations during development. Future studies in our laboratory will investigate the brain regions and molecules responsible for these behavioral changes. Copyright © 2016. Published by Elsevier B.V.

  7. Predictors of time to relapse in amphetamine-type substance users in the matrix treatment program in Iran: a Cox proportional hazard model application.

    Science.gov (United States)

    Moeeni, Maryam; Razaghi, Emran M; Ponnet, Koen; Torabi, Fatemeh; Shafiee, Seyed Ali; Pashaei, Tahereh

    2016-07-26

    The aim of this study was to determine which predictors influence the risk of relapse among a cohort of amphetamine-type substance (ATS) users in Iran. A Cox proportional hazards model was conducted to determine factors associated with the relapse time in the Matrix treatment program provided by the Iranian National Center of Addiction Studies (INCAS) between March 2010 and October 2011. Participating in more treatment sessions was associated with a lower probability of relapse. On the other hand, patients with less family support, longer dependence on ATS, and those with an experience of casual sex and a history of criminal offenses were more likely to relapse. This study broadens our understanding of factors influencing the risk of relapse in ATS use among an Iranian sample. The findings can guide practitioners during the treatment program.

  8. Artificial neural networks versus proportional hazards Cox models to predict 45-year all-cause mortality in the Italian Rural Areas of the Seven Countries Study

    Directory of Open Access Journals (Sweden)

    Puddu Paolo

    2012-07-01

    Full Text Available Abstract. Background: Projection pursuit regression, multilayer feed-forward networks, multivariate adaptive regression splines, and trees (including survival trees) have challenged classic multivariable models such as the multiple logistic function, the proportional hazards life table Cox model (Cox), the Poisson model, and the Weibull life table model to perform multivariable predictions. However, only artificial neural networks (NN) have become popular in medical applications. Results: We compared several Cox versus NN models in predicting 45-year all-cause mortality (45-ACM) by 18 risk factors selected a priori: age; father life status; mother life status; family history of cardiovascular diseases; job-related physical activity; cigarette smoking; body mass index (linear and quadratic terms); arm circumference; mean blood pressure; heart rate; forced expiratory volume; serum cholesterol; corneal arcus; diagnoses of cardiovascular diseases, cancer and diabetes; and minor ECG abnormalities at rest. Two Italian rural cohorts of the Seven Countries Study, made up of men aged 40 to 59 years, were enrolled and first examined in 1960 in Italy. Cox models were estimated by: (a) forcing all factors; (b) a forward-stepwise procedure; and (c) a backward-stepwise procedure. Observed cases of deaths and of survivors were computed in decile classes of estimated risk. Forced and stepwise NN were run and compared with the Cox models by C-statistics (ROC analysis). Out of 1591 men, 1447 died. Model global accuracies were extremely high by all methods (ROCs > 0.810), but there was no clear-cut superiority of any model in predicting 45-ACM. The highest ROCs (> 0.838) were observed with NN. There were inter-model variations in selecting predictive covariates: whereas all models concurred to define the role of 10 covariates (mainly cardiovascular risk factors), family history, heart rate and minor ECG abnormalities were not contributors by Cox models but were so by forced NN. Forced expiratory volume and arm

  9. The role of social networks and media receptivity in predicting age of smoking initiation: a proportional hazards model of risk and protective factors.

    Science.gov (United States)

    Unger, J B; Chen, X

    1999-01-01

    The increasing prevalence of adolescent smoking demonstrates the need to identify factors associated with early smoking initiation. Previous studies have shown that smoking by social network members and receptivity to pro-tobacco marketing are associated with smoking among adolescents. It is not clear, however, whether these variables also are associated with the age of smoking initiation. Using data from 10,030 California adolescents, this study identified significant correlates of age of smoking initiation using bivariate methods and a multivariate proportional hazards model. Age of smoking initiation was earlier among those adolescents whose friends, siblings, or parents were smokers, and among those adolescents who had a favorite tobacco advertisement, had received tobacco promotional items, or would be willing to use tobacco promotional items. Results suggest that the smoking behavior of social network members and pro-tobacco media influences are important determinants of age of smoking initiation. Because early smoking initiation is associated with higher levels of addiction in adulthood, tobacco control programs should attempt to counter these influences.

  10. Limitations of Cox Proportional Hazards Analysis in Mortality Prediction of Patients with Acute Coronary Syndrome

    Directory of Open Access Journals (Sweden)

    Babińska Magdalena

    2015-12-01

    Full Text Available The aim of this study was to evaluate the possibility of incorrect assessment of mortality risk factors in a group of patients affected by acute coronary syndrome, due to the lack of hazard proportionality in the Cox regression model. One hundred and fifty consecutive patients with acute coronary syndrome (ACS) and no age limit were enrolled. Univariable and multivariable Cox proportional hazard analyses were performed. The proportional hazard assumptions were verified using Schoenfeld residuals, the χ2 test, and the rank correlation coefficient τ between residuals and time. In the total group of 150 patients, 33 (22.0%) deaths from any cause were registered in the follow-up period of 64 months. The non-survivors were significantly older and had an increased prevalence of diabetes and erythrocyturia, a longer history of coronary artery disease, higher concentrations of serum creatinine, cystatin C, uric acid, glucose, C-reactive protein (CRP), homocysteine and B-type natriuretic peptide (NT-proBNP), and lower concentrations of serum sodium. No significant differences in echocardiography parameters were observed between groups. The following factors were risk-of-death factors and fulfilled the proportional hazard assumption in the univariable model: smoking, occurrence of diabetes and anaemia, duration of coronary artery disease, and abnormal serum concentrations of uric acid, sodium, homocysteine, cystatin C and NT-proBNP; in the multivariable model, the risk-of-death factors were smoking and elevated concentrations of homocysteine and NT-proBNP. The study has demonstrated that violation of the proportional hazard assumption in the Cox regression model may lead to creating a false model that does not include only time-independent predictive factors.
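
    The Schoenfeld-residual check described above can be reproduced along these lines with the Python lifelines library (a sketch; file and column names hypothetical):

        import pandas as pd
        from lifelines import CoxPHFitter
        from scipy.stats import kendalltau

        df = pd.read_csv("acs.csv")                 # hypothetical ACS cohort data
        cph = CoxPHFitter().fit(df, duration_col="T", event_col="death")

        # Scaled Schoenfeld residuals, one row per observed event; a nonzero
        # rank correlation with event time suggests the covariate violates PH.
        resid = cph.compute_residuals(df, kind="scaled_schoenfeld")
        event_times = df.loc[resid.index, "T"]
        for col in resid.columns:
            tau, p = kendalltau(event_times, resid[col])
            print(col, round(tau, 3), round(p, 4))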

  11. A flexible additive multiplicative hazard model

    DEFF Research Database (Denmark)

    Martinussen, T.; Scheike, T. H.

    2002-01-01

    Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect

  12. The Application of Extended Cox Proportional Hazard Method for Estimating Survival Time of Breast Cancer

    Science.gov (United States)

    Husain, Hartina; Astuti Thamrin, Sri; Tahir, Sulaiha; Mukhlisin, Ahmad; Mirna Apriani, M.

    2018-03-01

    Breast cancer is one type of cancer that is a leading cause of death worldwide. This study aims to model the factors that affect the survival time and cure rate of breast cancer patients. The extended Cox model, a modification of the Cox proportional hazards model for use when the proportional hazards assumption is not met, is used in this study. The maximum likelihood estimation approach is used to estimate the parameters of the model. This method is then applied to medical record data of breast cancer patients in 2011-2016, taken from Hasanuddin University Education Hospital. The results obtained indicate that the factors that affect the survival time of breast cancer patients are malignancy and leukocyte levels.

  13. Determining the effect of worker exposure conditions on the risk of hearing loss in noisy industrial workroom using Cox proportional hazard model.

    Science.gov (United States)

    Aliabadi, Mohsen; Fereidan, Mohammad; Farhadian, Maryam; Tajik, Leila

    2015-01-01

    In noisy workrooms, exposure conditions such as noise level, exposure duration and use of hearing protection devices are contributory factors to hearing loss. The aim of this study was to determine the effect of exposure conditions on the risk of hearing loss using the Cox model. Seventy workers employed in a press workshop were selected, their hearing thresholds were studied using an audiometric test, and their noise exposure histories were analyzed. The results of the Cox model showed that job type, smoking, and the use of protection devices had significant effects on hearing loss. The relative risk of hearing loss in smokers was 1.1 times that of non-smokers. The relative risk of hearing loss in workers with intermittent use of protection devices was 3.3 times that of those who used these devices continuously. The Cox model could analyze the effect of exposure conditions on hearing loss and provides useful information for managers in order to improve hearing conservation programs.

  14. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network.

    Science.gov (United States)

    Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval

    2018-02-26

    Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
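
    The heart of this family of methods is using the negative Cox partial log-likelihood as the network's training loss; below is a minimal numpy sketch of that loss (an illustration of the technique, not the DeepSurv code itself, using Breslow's handling of tied event times):

        import numpy as np

        def cox_partial_loss(risk, time, event):
            """Negative Cox partial log-likelihood.

            risk:  per-subject log-risk scores produced by the model
            time:  observed survival or censoring times
            event: 1 if the event was observed, 0 if censored
            """
            order = np.argsort(-np.asarray(time))          # descending time
            risk = np.asarray(risk, dtype=float)[order]
            observed = np.asarray(event, dtype=bool)[order]
            # Log of the risk-set sum: cumulative logsumexp over subjects
            # still at risk at each event time.
            log_risk_set = np.logaddexp.accumulate(risk)
            return -np.mean(risk[observed] - log_risk_set[observed])

    Minimizing this loss over the parameters of any differentiable risk function recovers the ordinary Cox fit when the score is linear in the covariates; DeepSurv's contribution is replacing that linear score with a neural network.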

  15. Multistate cohort models with proportional transfer rates

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2006-01-01

    We present a new, broadly applicable approach to summarizing the behavior of a cohort as it moves through a variety of statuses (or states). The approach is based on the assumption that all rates of transfer maintain a constant ratio to one another over age. We present closed-form expressions ... of transfer rates. The two living state case and hierarchical multistate models with any number of living states are analyzed in detail. Applying our approach to 1997 U.S. fertility data, we find that observed rates of parity progression are roughly proportional over age. Our proportional transfer rate approach provides trajectories by parity state and facilitates analyses of the implications of changes in parity rate levels and patterns. More women complete childbearing at parity 2 than at any other parity, and parity 2 would be the modal parity in models with total fertility rates (TFRs) of 1.40 to 2...

  16. Comparing treatment effects after adjustment with multivariable Cox proportional hazards regression and propensity score methods

    NARCIS (Netherlands)

    Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H

    PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort

  17. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate different aging properties and stochastic comparisons of the model.

  18. A phenomenological SMA model for combined axial–torsional proportional/non-proportional loading conditions

    International Nuclear Information System (INIS)

    Bodaghi, M.; Damanpack, A.R.; Aghdam, M.M.; Shakeri, M.

    2013-01-01

    In this paper, a simple and robust phenomenological model for shape memory alloys (SMAs) is proposed to simulate main features of SMAs under uniaxial as well as biaxial combined axial–torsional proportional/non-proportional loadings. The constitutive model for polycrystalline SMAs is developed within the framework of continuum thermodynamics of irreversible processes. The model nominates the volume fractions of self-accommodated and oriented martensite as scalar internal variables and the preferred direction of oriented martensitic variants as directional internal variable. An algorithm is introduced to develop explicit relationships for the thermo-mechanical behavior of SMAs under uniaxial and biaxial combined axial–torsional proportional/non-proportional loading conditions and also thermal loading. It is shown that the model is able to simulate main aspects of SMAs including self-accommodation, martensitic transformation, orientation and reorientation of martensite, shape memory effect, ferro-elasticity and pseudo-elasticity. A description of the time-discrete counterpart of the proposed SMA model is presented. Experimental results of uniaxial tension and biaxial combined tension–torsion non-proportional tests are simulated and a good qualitative correlation between numerical and experimental responses is achieved. Due to simplicity and accuracy, the model is expected to be used in the future studies dealing with the analysis of SMA devices in which two stress components including one normal and one shear stress are dominant

  19. A parsimonious model for the proportional control valve

    OpenAIRE

    Elmer, KF; Gentle, CR

    2001-01-01

    A generic non-linear dynamic model of a direct-acting electrohydraulic proportional solenoid valve is presented. The valve consists of two subsystems: a spool assembly and one or two unidirectional proportional solenoids. These two subsystems are modelled separately. The solenoid is modelled as a non-linear resistor-inductor combination, with inductance parameters that change with current. An innovative modelling method has been used to represent these components. The spool assembly is modelled...

  20. The proportional odds cumulative incidence model for competing risks

    DEFF Research Database (Denmark)

    Eriksson, Frank; Li, Jianing; Scheike, Thomas

    2015-01-01

    We suggest an estimator for the proportional odds cumulative incidence model for competing risks data. The key advantage of this model is that the regression parameters have the simple and useful odds ratio interpretation. The model has been considered by many authors, but it is rarely used in practice due to the lack of reliable estimation procedures. We suggest such procedures and show that their performance improves considerably on existing methods. We also suggest a goodness-of-fit test for the proportional odds assumption. We derive the large sample properties and provide estimators...
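
    The odds-ratio interpretation follows directly from the model's form; in generic notation (a standard way of writing the proportional odds cumulative incidence model, not necessarily the authors' exact parametrization):

        \frac{F_1(t \mid X)}{1 - F_1(t \mid X)}
        = \frac{F_{10}(t)}{1 - F_{10}(t)}\,\exp(\beta^{\top} X),

    where F_1(t | X) is the cumulative incidence of the cause of interest, F_{10}(t) is its baseline counterpart, and exp(β_k) is then a time-constant odds ratio for covariate k.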

  1. Populational Growth Models Proportional to Beta Densities with Allee Effect

    Science.gov (United States)

    Aleixo, Sandra M.; Rocha, J. Leonel; Pestana, Dinis D.

    2009-05-01

    We consider population growth models with Allee effect, proportional to beta densities with shape parameters p and 2, where the dynamical complexity is related with the Malthusian parameter r. For p > 2, these models exhibit population dynamics with a natural Allee effect. However, in the case of 1 < p ≤ 2, the models do not include this effect. In order to enforce it, we present some alternative models and investigate their dynamics, presenting some important results.

  2. Application of random survival forests in understanding the determinants of under-five child mortality in Uganda in the presence of covariates that satisfy the proportional and non-proportional hazards assumption.

    Science.gov (United States)

    Nasejje, Justine B; Mwambi, Henry

    2017-09-07

    Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model; otherwise, using covariates that clearly violate the assumption would yield invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda, using Demographic and Health Survey data. Thus the first part of the analysis is based on the classical Cox PH model and the second part on random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the
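
    A sketch of how the forest stage of such an analysis might look, assuming the Python scikit-survival package as the random survival forest implementation (file and column names hypothetical; covariates assumed already numerically encoded):

        import pandas as pd
        from sksurv.ensemble import RandomSurvivalForest
        from sksurv.util import Surv

        df = pd.read_csv("dhs_children.csv")      # hypothetical DHS extract
        X = df.drop(columns=["age_months", "died"])
        y = Surv.from_arrays(event=df["died"].astype(bool), time=df["age_months"])

        # No PH assumption: splits are chosen by the log-rank statistic, so
        # covariates with time-varying effects can still contribute.
        rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=15,
                                   random_state=0)
        rsf.fit(X, y)
        print(rsf.score(X, y))                    # Harrell's concordance index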

  3. Modeling lahar behavior and hazards

    Science.gov (United States)

    Manville, Vernon; Major, Jon J.; Fagents, Sarah A.

    2013-01-01

    Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to > 100 km at speeds exceeding tens of km hr-1. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.

  4. Augmenting the logrank test in the design of clinical trials in which non-proportional hazards of the treatment effect may be anticipated

    Directory of Open Access Journals (Sweden)

    Patrick Royston

    2016-02-01

    Full Text Available Abstract. Background: Most randomized controlled trials with a time-to-event outcome are designed assuming proportional hazards (PH) of the treatment effect. The sample size calculation is based on a logrank test. However, non-proportional hazards are increasingly common. At analysis, the estimated hazard ratio with a confidence interval is usually presented. The estimate is often obtained from a Cox PH model with treatment as a covariate. If non-proportional hazards are present, the logrank and equivalent Cox tests may lose power. To safeguard power, we previously suggested a 'joint test' combining the Cox test with a test of non-proportional hazards. Unfortunately, a larger sample size is needed to preserve power under PH. Here, we describe a novel test that unites the Cox test with a permutation test based on restricted mean survival time. Methods: We propose a combined hypothesis test based on a permutation test of the difference in restricted mean survival time across time. The test involves the minimum of the Cox and permutation test P-values. We approximate its null distribution and correct it for correlation between the two P-values. Using extensive simulations, we assess the type 1 error and power of the combined test under several scenarios and compare with other tests. We investigate powering a trial using the combined test. Results: The type 1 error of the combined test is close to nominal. Power under proportional hazards is slightly lower than for the Cox test. Enhanced power is available when the treatment difference shows an 'early effect', an initial separation of survival curves which diminishes over time. The power is reduced under a 'late effect', when little or no difference in survival curves is seen for an initial period and then a late separation occurs. We propose a method of powering a trial using the combined test. The 'insurance premium' offered by the combined test to safeguard power under non
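
    The restricted mean survival time (RMST) underlying the permutation component is simply the area under the survival curve up to a horizon t*; a minimal sketch of a between-arm RMST difference, assuming the Python lifelines library (file, column names, and horizon hypothetical):

        import pandas as pd
        from lifelines import KaplanMeierFitter
        from lifelines.utils import restricted_mean_survival_time

        df = pd.read_csv("trial.csv")              # hypothetical: T, E, arm (0/1)
        rmst = {}
        for arm, g in df.groupby("arm"):
            km = KaplanMeierFitter().fit(g["T"], g["E"], label=str(arm))
            rmst[arm] = restricted_mean_survival_time(km, t=24.0)  # horizon t* = 24
        print(rmst[1] - rmst[0])                   # between-arm RMST difference

    Recomputing this difference under random permutations of the arm labels yields the reference distribution for the permutation P-value that the combined test uses alongside the Cox test.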

  5. Comparative Distributions of Hazard Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Rana Abdul Wajid

    2006-07-01

    Full Text Available In this paper we present a comparison of the distributions used in hazard analysis. Simulation is used to study the behavior of the hazard distribution models, and the fundamentals of hazard analysis are discussed in terms of failure criteria. We illustrate the flexibility of hazard modeling with a distribution that can approximate several standard distributions.
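
    For a reader comparing candidate hazard distributions, a minimal illustration (not the paper's simulation code) is to evaluate the hazard rate h(t) = f(t)/S(t) implied by each candidate; the distributions below are standard choices, not necessarily the ones studied in the paper.

      import numpy as np
      from scipy import stats

      t = np.linspace(0.01, 5, 500)
      candidates = {
          "exponential (constant hazard)": stats.expon(),
          "Weibull k=0.8 (decreasing hazard)": stats.weibull_min(0.8),
          "Weibull k=2.0 (increasing hazard)": stats.weibull_min(2.0),
          "log-normal (non-monotone hazard)": stats.lognorm(0.5),
      }
      for name, dist in candidates.items():
          hazard = dist.pdf(t) / dist.sf(t)  # h(t) = f(t)/S(t)
          print(f"{name}: h(1) = {np.interp(1.0, t, hazard):.3f}")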

  6. Proportional and scale change models to project failures of mechanical components with applications to space station

    Science.gov (United States)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR) and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components, with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of a thruster valve by treating these valves as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazard model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z); this unobserved quantity z can be used to describe heterogeneity systematically. Methods for estimating the model parameters from censored data are developed for the various models. The available field data (from previously flown flights) come from non-renewable systems, so failure rates estimated from such data need to be modified for renewable systems such as the thruster valve.
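
    A minimal sketch of this kind of process, under stated assumptions (a power-law form for the cumulative intensity and a gamma frailty, neither of which is confirmed by the abstract): simulate one unit's recurrent failures by inverting the cumulative intensity, with the frailty z scaling the intensity multiplicatively.

      import numpy as np

      def simulate_unit(theta, delta, z, t_end, rng):
          # Successive event times satisfy z * (Lambda(t_{i+1}) - Lambda(t_i)) ~ Exp(1),
          # where Lambda(t) = (t / theta) ** delta is the cumulative intensity.
          times, lam = [], 0.0
          while True:
              lam += rng.exponential() / z       # frailty z scales the intensity
              t = theta * lam ** (1.0 / delta)
              if t > t_end:
                  return np.array(times)
              times.append(t)

      rng = np.random.default_rng(1)
      z = rng.gamma(shape=2.0, scale=0.5)        # unobserved heterogeneity for this unit
      events = simulate_unit(theta=100.0, delta=1.8, z=z, t_end=500.0, rng=rng)
      print(f"{len(events)} failures with frailty z = {z:.3f}")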

  7. Hazard Warning: model misuse ahead

    DEFF Research Database (Denmark)

    Dickey-Collas, M.; Payne, Mark; Trenkel, V.

    2014-01-01

    The use of modelling approaches in marine science, and in particular fisheries science, is explored. We highlight that the choice of model used for an analysis should account for the question being posed or the context of the management problem. We examine a model-classification scheme based… …a first step in assessing the utility of a model in the context of knowledge for decision-making in management…

  8. Geospatial subsidence hazard modelling at Sterkfontein Caves ...

    African Journals Online (AJOL)

    The geo-hazard subsidence model includes historic subsidence occurrences, terrain (water flow) and water accumulation. Water accumulating on the surface will percolate and reduce the strength of the soil mass, possibly inducing subsidence. Areas for further geotechnical investigation are identified, demonstrating that a ...

  9. Modeling and Hazard Analysis Using STPA

    Science.gov (United States)

    Ishimatsu, Takuto; Leveson, Nancy; Thomas, John; Katahira, Masa; Miyamoto, Yuko; Nakao, Haruka

    2010-09-01

    A joint research project between MIT and JAXA/JAMSS is investigating the application of a new hazard analysis to the system and software in the HTV. Traditional hazard analysis focuses on component failures, but software does not fail in this way. Software most often contributes to accidents by commanding the spacecraft into an unsafe state (e.g., turning off the descent engines prematurely) or by not issuing required commands. That makes the standard hazard analysis techniques of limited usefulness on software-intensive systems, a description that fits most spacecraft built today. STPA is a new hazard analysis technique based on systems theory rather than reliability theory. It treats safety as a control problem rather than a failure problem. The goal of STPA, which is to create a set of scenarios that can lead to a hazard, is the same as that of FTA, but STPA includes a broader set of potential scenarios, including those in which no failures occur but problems arise due to unsafe and unintended interactions among the system components. STPA also provides more guidance to the analyst than traditional fault tree analysis: functional control diagrams are used to guide the analysis. In addition, JAXA uses a model-based system engineering development environment (created originally by Leveson and called SpecTRM) which also assists in the hazard analysis. One of the advantages of STPA is that it can be applied early in the system engineering and development process, in a safety-driven design process where hazard analysis drives the design decisions rather than waiting until reviews identify problems that are then costly or difficult to fix. It can also be applied in an after-the-fact analysis and hazard assessment, which is what we did in this case study. This paper describes the experimental application of STPA to the JAXA HTV in order to determine the feasibility and usefulness of the new hazard analysis technique. Because the HTV was originally developed using fault tree analysis…

  10. Hazard identification based on plant functional modelling

    International Nuclear Information System (INIS)

    Rasmussen, B.; Whetton, C.

    1993-10-01

    A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)
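
    The Intent/Method/Constraint decomposition can be pictured with a small, purely illustrative object structure (hypothetical names, not the authors' implementation):

      from dataclasses import dataclass, field

      @dataclass
      class Intent:
          goal: str
          methods: list["Intent"] = field(default_factory=list)      # how the Intent is realized
          constraints: list["Intent"] = field(default_factory=list)  # what limits the Intent

          def walk(self, depth=0):
              # Traverse the hierarchy, e.g. to drive a checklist-based review
              yield depth, self.goal
              for child in self.methods + self.constraints:
                  yield from child.walk(depth + 1)

      plant = Intent("Produce product safely",
                     methods=[Intent("Run reactor", methods=[Intent("Maintain cooling")])],
                     constraints=[Intent("Keep pressure below design limit")])
      for depth, goal in plant.walk():
          print("  " * depth + goal)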

  11. Proportionality lost - proportionality regained?

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2010-01-01

    In recent years, the European Court of Justice (the ECJ) seems to have accepted restrictions on the freedom of establishment and other basic freedoms, despite the fact that a more thorough proportionality test would have revealed that the restriction in question did not pass the 'rule of reason' ...

  12. The New Italian Seismic Hazard Model

    Science.gov (United States)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community in elaborating a new reference seismic hazard model, mainly aimed at updating the seismic building code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized in six tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has selected the most updated information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources, and gridded seismicity, based on different approaches. The GMPEs task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures for testing, against the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme…

  13. Modeling Compound Flood Hazards in Coastal Embayments

    Science.gov (United States)

    Moftakhari, H.; Schubert, J. E.; AghaKouchak, A.; Luke, A.; Matthew, R.; Sanders, B. F.

    2017-12-01

    Coastal cities around the world are built on lowland topography adjacent to coastal embayments and river estuaries, where multiple factors threaten increasing flood hazards (e.g. sea level rise and river flooding). Quantitative risk assessment is required for the administration of flood insurance programs and the design of cost-effective flood risk reduction measures. This demands a characterization of extreme water levels, such as 100- and 500-year return period events. Furthermore, hydrodynamic flood models are routinely used to characterize localized flood level intensities (i.e., local depth and velocity) based on boundary forcing sampled from extreme value distributions. For example, extreme flood discharges in the U.S. are estimated from measured flood peaks using the Log-Pearson Type III distribution. However, configuring hydrodynamic models for coastal embayments is challenging because of compound extreme flood events: events caused by a combination of extreme sea levels, extreme river discharges, and possibly other factors such as extreme waves and precipitation causing pluvial flooding in urban developments. Here, we present an approach for flood risk assessment that coordinates multivariate extreme analysis with hydrodynamic modeling of coastal embayments. First, we evaluate the significance of the correlation structure between terrestrial freshwater inflow and oceanic variables; second, this correlation structure is described using copula functions in the unit joint probability domain; and third, we choose a series of compound design scenarios for hydrodynamic modeling based on their occurrence likelihood. The design scenarios include the most likely compound event (with the highest joint probability density), the preferred marginal scenario, and reproduced time series of ensembles based on Monte Carlo sampling of the bivariate hazard domain. The comparison between the resulting extreme water dynamics under the compound hazard scenarios explained above provides insight into the…
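
    The copula step can be sketched as follows (synthetic data and a Gaussian copula are assumptions for illustration; the study may use other copula families): estimate the dependence between river discharge and coastal water level on the normal-score scale, then evaluate the joint exceedance probability of the two marginal 100-year levels.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Synthetic correlated annual maxima standing in for gauge records
      z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
      discharge = stats.lognorm(0.8).ppf(stats.norm.cdf(z[:, 0]))
      sea_level = stats.genextreme(-0.1).ppf(stats.norm.cdf(z[:, 1]))

      def normal_scores(x):
          # Empirical marginals -> uniform ranks -> standard normal scores
          return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

      rho = np.corrcoef(normal_scores(discharge), normal_scores(sea_level))[0, 1]

      # Joint exceedance of the two marginal 100-year levels under the fitted copula
      sim = stats.norm.cdf(rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000))
      p_joint = np.mean((sim[:, 0] > 0.99) & (sim[:, 1] > 0.99))
      print(f"rho = {rho:.2f}; joint exceedance {p_joint:.5f} vs {1e-4:.5f} if independent")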

  14. Quantitative occupational risk model: Single hazard

    International Nuclear Information System (INIS)

    Papazoglou, I.A.; Aneziris, O.N.; Bellamy, L.J.; Ale, B.J.M.; Oh, J.

    2017-01-01

    A model for the quantification of the occupational risk of a worker exposed to a single hazard is presented. The model connects the working conditions and worker behaviour to the probability of an accident resulting in one of three types of consequence: recoverable injury, permanent injury, and death. Working conditions and the safety barriers in place to reduce the likelihood of an accident are included. Logical connections are modelled through an influence diagram. Quantification of the model is based on two sources of information: (a) the number of accidents observed over a period of time and (b) an assessment of exposure data on activities and working conditions over the same period of time and the same working population. The effectiveness of risk-reducing measures affecting the working conditions, worker behaviour and/or safety barriers can be quantified through the effect of these measures on occupational risk. - Highlights: • Quantification of occupational risk from a single hazard. • An influence diagram connects working conditions, worker behaviour and safety barriers. • Necessary data include the number of accidents and the total exposure of workers. • The effectiveness of risk-reducing measures is quantified through the impact on the risk. • An example illustrates the methodology.
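
    A toy numeric reading of the model (assumed values throughout, not the paper's data): the risk per consequence type is the exposure-weighted accident probability split across the three consequence classes, and a risk-reducing measure is scored by its effect on that product.

      exposure_hours = 1600                  # assumed hours/year on the activity
      p_accident_per_hour = 4e-6             # from accident counts / total exposure
      p_barrier_fails = 0.2                  # assumed barrier ineffectiveness

      # Assumed conditional split of consequences, given an accident
      split = {"recoverable injury": 0.80, "permanent injury": 0.17, "death": 0.03}

      accident_rate = exposure_hours * p_accident_per_hour * p_barrier_fails
      for outcome, share in split.items():
          print(f"{outcome}: {accident_rate * share:.2e} per worker-year")

      # A measure halving the barrier failure probability halves every consequence risk
      print(f"improved barrier: {accident_rate * 0.5:.2e} accidents per worker-year")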

  15. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  16. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.

  17. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1982-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined

  18. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1990-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms, and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs.

  19. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  20. Converging or Crossing Curves: Untie the Gordian Knot or Cut it? Appropriate Statistics for Non-Proportional Hazards in Decitabine DACO-016 Study (AML).

    Science.gov (United States)

    Tomeczkowski, Jörg; Lange, Ansgar; Güntert, Andreas; Thilakarathne, Pushpike; Diels, Joris; Xiu, Liang; De Porre, Peter; Tapprich, Christoph

    2015-09-01

    Among patients with acute myeloid leukemia (AML), the DACO-016 randomized study showed a reduction in mortality for decitabine [Dacogen(®) (DAC), Eisai Inc., Woodcliff Lake, NJ, USA] compared with treatment choice (TC): at the primary analysis the hazard ratio (HR) was 0.85 (95% confidence interval 0.69-1.04; stratified log-rank P = 0.108). With two interim analyses, the two-sided alpha was adjusted to 0.0462. With 1 year of additional follow-up the HR reached 0.82 (nominal P = 0.0373). These data resulted in approval of DAC in the European Union, though not in the United States. Though pre-specified, the log-rank test could be considered suboptimal for assessing the observed survival difference because of the non-proportional-hazard nature of the survival curves. We applied the Wilcoxon test as a sensitivity analysis. Patients were randomized to DAC (N = 242) or TC (N = 243). One hundred and eight (44.4%) patients in the TC arm and 91 (37.6%) patients in the DAC arm selectively crossed over to subsequent disease-modifying therapies at progression, which might impact survival beyond the median, with resultant converging curves (and disproportional hazards). The stratified Wilcoxon test showed a significant improvement in median (95% CI) overall survival with DAC [7.7 (6.2; 9.2) months] versus TC [5.0 (4.3; 6.3) months; P = 0.0458]. The Wilcoxon test thus indicated a significant increase in survival for DAC versus TC, in contrast to the log-rank test. Janssen-Cilag GmbH.
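
    A hedged sketch of the sensitivity-analysis idea (assuming recent versions of the Python lifelines package, whose logrank_test accepts a weightings argument; synthetic data stand in for the trial): the Wilcoxon (Gehan-Breslow) weighting emphasizes early differences, where most of the events occur, and so can detect an effect that converging curves hide from the log-rank test.

      import numpy as np
      from lifelines.statistics import logrank_test

      rng = np.random.default_rng(3)
      # Synthetic 'converging curves': a subgroup of the control arm fails early
      t_dac = rng.exponential(9.0, 242)
      early = rng.random(243) < 0.5
      t_tc = np.where(early, rng.exponential(3.0, 243), rng.exponential(9.0, 243))
      e_dac, e_tc = np.ones_like(t_dac), np.ones_like(t_tc)

      lr = logrank_test(t_dac, t_tc, e_dac, e_tc)
      wx = logrank_test(t_dac, t_tc, e_dac, e_tc, weightings="wilcoxon")
      print(f"log-rank P = {lr.p_value:.4f}; Wilcoxon P = {wx.p_value:.4f}")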

  1. An estimating equation for parametric shared frailty models with marginal additive hazards

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Martinussen, Torben

    2004-01-01

    Multivariate failure time data arise when data consist of clusters in which the failure times may be dependent. A popular approach to such data is the marginal proportional hazards model with estimation under the working independence assumption. In some contexts, however, it may be more reasonable...

  2. Discrimination of numerical proportions: A comparison of binomial and Gaussian models.

    Science.gov (United States)

    Raidvee, Aire; Lember, Jüri; Allik, Jüri

    2017-01-01

    Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data on numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that the Gaussian and binomial models represent two different fundamental principles (internal noise vs. use of only a fraction of the available information), both of which are plausible descriptions of visual perception.

  3. A Partial Proportional Odds Model for Pedestrian Crashes at Mid-Blocks in Melbourne Metropolitan Area

    Directory of Open Access Journals (Sweden)

    Toran Pour Alireza

    2016-01-01

    Full Text Available Pedestrian crashes account for 11% of all reported traffic crashes in the Melbourne metropolitan area between 2004 and 2013. There are very limited studies on pedestrian accidents at mid-blocks, yet mid-block crashes account for about 46% of the total pedestrian crashes in the Melbourne metropolitan area, and about 50% of all pedestrian fatalities occur at mid-blocks. In this research, a Partial Proportional Odds (PPO) model is applied to examine vehicle-pedestrian crash severity at mid-blocks in the Melbourne metropolitan area. The PPO model is a logistic regression model that allows the covariates that meet the proportional odds assumption to affect different crash severity levels with the same magnitude, whereas the covariates that do not meet the proportional odds assumption can have different effects on different severity levels. In this research vehicle-pedestrian crashes at mid-blocks are analysed for the first time. In addition, factors such as the distance of crashes to public transport stops, average road slope, and some social characteristics are considered in the model for the first time. Results of the PPO model show that speed limit, light condition, pedestrian age and gender, and vehicle type are the most significant factors influencing vehicle-pedestrian crash severity at mid-blocks.
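
    One common way to approximate a partial proportional odds model, sketched below with hypothetical column names (this is not the authors' estimation code), is to fit a separate binary logit for each cumulative severity threshold; covariates whose coefficients are stable across thresholds satisfy the proportional odds assumption, while the others are the 'partial' part.

      import statsmodels.api as sm

      def cumulative_logits(df, severity_col, covariate_cols):
          # One binary logit per threshold j: P(severity > j | covariates)
          X = sm.add_constant(df[covariate_cols])
          fits = {}
          for j in sorted(df[severity_col].unique())[:-1]:
              y = (df[severity_col] > j).astype(int)
              fits[j] = sm.Logit(y, X).fit(disp=False)
          return fits

      # fits = cumulative_logits(crashes, "severity", ["speed_limit", "dark", "ped_age"])
      # Coefficients stable across thresholds -> proportional odds holds for that covariate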

  4. Bayesian Predictive Inference of a Proportion Under a Twofold Small-Area Model

    Directory of Open Access Journals (Sweden)

    Nandram Balgobin

    2016-03-01

    Full Text Available We extend the twofold small-area model of Stukel and Rao (1997; 1999) to accommodate binary data. An example is the Third International Mathematics and Science Study (TIMSS), in which pass-fail data for mathematics of students from US schools (clusters) are available at the third grade by regions and communities (small areas). We compare the finite population proportions of these small areas. We present a hierarchical Bayesian model in which the first-stage binary responses have independent Bernoulli distributions, and each subsequent stage is modeled using a beta distribution, which is parameterized by its mean and a correlation coefficient. This twofold small-area model has an intracluster correlation at the first stage and an intercluster correlation at the second stage. The final-stage mean and all correlations are assumed to be noninformative independent random variables. We show how to infer the finite population proportion of each area. We have applied our models to synthetic TIMSS data to show that the twofold model is preferred over a onefold small-area model that ignores the clustering within areas. We further compare these models using a simulation study, which shows that the intracluster correlation is particularly important.
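
    A minimal hierarchical sketch in PyMC (an assumption for illustration; the authors' exact model, stages, and priors differ): area proportions are drawn from a beta distribution with mean mu and concentration kappa, where the intracluster correlation corresponds to 1/(1 + kappa) under this parameterization.

      import numpy as np
      import pymc as pm

      k = np.array([12, 30, 7, 21])   # synthetic pass counts per small area
      n = np.array([40, 55, 25, 60])  # students tested per area

      with pm.Model():
          mu = pm.Uniform("mu", 0, 1)               # noninformative final-stage mean
          kappa = pm.HalfNormal("kappa", sigma=50)  # concentration; rho = 1/(1+kappa)
          theta = pm.Beta("theta", alpha=mu * kappa, beta=(1 - mu) * kappa, shape=len(n))
          pm.Binomial("y", n=n, p=theta, observed=k)
          idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

      print(idata.posterior["theta"].mean(dim=("chain", "draw")).values)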

  5. Bibliography - Existing Guidance for External Hazard Modelling

    International Nuclear Information System (INIS)

    Decker, Kurt

    2015-01-01

    The bibliography of deliverable D21.1 includes existing international and national guidance documents and standards on external hazard assessment, together with a selection of recent scientific papers which are regarded as providing useful information on the state of the art of external event modelling. The literature database is subdivided into International Standards, National Standards, and Science Papers. The deliverable is treated as a 'living document' which is regularly updated as necessary during the lifetime of ASAMPSA-E. The current content of the database is about 140 papers. Most of the articles are available as full-text versions in PDF format. The deliverable is available as an EndNote X4 database and as text files. The database includes the following information: reference, key words, abstract (if available), PDF file of the original paper (if available), and notes (comments by the ASAMPSA-E consortium, if available). The database is stored at the ASAMPSA-E FTP server hosted by IRSN. PDF files of original papers are accessible through the EndNote software.

  6. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure is shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
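
    In practice, sampling weights enter a weighted Cox fit roughly as below (a generic inverse-probability-weighted sketch assuming the lifelines package and hypothetical column names, not the authors' estimating equation):

      from lifelines import CoxPHFitter

      def weighted_cox(df):
          # df columns (hypothetical): time, event, exposure, and w, the inverse of
          # each subject's sampling probability under the ODS design
          cph = CoxPHFitter()
          cph.fit(df, duration_col="time", event_col="event",
                  weights_col="w", robust=True)  # sandwich variance under weighting
          return cph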

  8. Macro-level vulnerable road users crash analysis: A Bayesian joint modeling approach of frequency and proportion.

    Science.gov (United States)

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung

    2017-10-01

    This study aims at contributing to the literature on pedestrian and bicyclist safety by building on conventional count regression models to explore exogenous factors affecting pedestrian and bicyclist crashes at the macroscopic level. In traditional count models, the effects of exogenous factors on non-motorist crashes are investigated directly. However, vulnerable road users' crashes are collisions between vehicles and non-motorists, so the exogenous factors can affect non-motorist crashes through both the non-motorists and the vehicle drivers. To accommodate the potentially different impacts of exogenous factors, we express the non-motorist crash count as the product of the total crash count and the proportion of non-motorist crashes, and formulate a joint model consisting of a negative binomial (NB) model and a logit model for the two parts, respectively. The formulated joint model is estimated using non-motorist crash data based on the Traffic Analysis Districts (TADs) in Florida. Meanwhile, the traditional NB model is also estimated and compared with the joint model. The results indicate that the joint model provides a better fit to the data and can identify more significant variables. Subsequently, a novel joint screening method is suggested based on the proposed model to identify hot zones for non-motorist crashes. The hot zones of non-motorist crashes are identified and divided into three types: hot zones with a more dangerous driving environment only, hot zones with more hazardous walking and cycling conditions only, and hot zones with both. It is expected that the joint model and screening method can help decision makers, transportation officials, and community planners make more efficient treatments to proactively improve pedestrian and bicyclist safety. Published by Elsevier Ltd.
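
    A two-part stand-in for the joint model (not the authors' Bayesian estimator; column names are hypothetical) mirrors the count-times-proportion decomposition with a negative binomial GLM for totals and a binomial GLM for the non-motorist share:

      import numpy as np
      import statsmodels.api as sm

      def fit_two_part(df, covariate_cols):
          X = sm.add_constant(df[covariate_cols])
          total = sm.GLM(df["total_crashes"], X,
                         family=sm.families.NegativeBinomial()).fit()
          y2 = np.column_stack([df["nonmotorist_crashes"],
                                df["total_crashes"] - df["nonmotorist_crashes"]])
          share = sm.GLM(y2, X, family=sm.families.Binomial()).fit()
          return total, share

      # Expected non-motorist crashes per TAD: total.predict(X) * share.predict(X)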

  9. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions differ across phases. Given the best hazard-based model for each incident time phase, the predictions are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.
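
    Model comparison of this kind can be sketched with the lifelines AFT fitters (an assumption for illustration; the study's flexible spline-based models are not available there, and the column names are hypothetical), selecting among candidate distributions by AIC:

      from lifelines import WeibullAFTFitter, LogNormalAFTFitter, LogLogisticAFTFitter

      def best_aft(df):
          fitters = {"Weibull": WeibullAFTFitter(),
                     "log-normal": LogNormalAFTFitter(),
                     "log-logistic": LogLogisticAFTFitter()}
          aics = {}
          for name, f in fitters.items():
              f.fit(df, duration_col="duration_min", event_col="cleared")
              aics[name] = f.AIC_
          return min(aics, key=aics.get), aics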

  10. An Optimal Allocation Model of Public Transit Mode Proportion for the Low-Carbon Transportation

    Directory of Open Access Journals (Sweden)

    Linjun Lu

    2015-01-01

    Full Text Available Public transit has been widely recognized as a potential way to develop low-carbon transportation. In this paper, an optimal allocation model of public transit mode proportion (MPMP) is built to achieve low-carbon public transit. Optimal ratios of passenger traffic for rail, bus, and taxi are derived by running the model with typical data. With different values of traffic demand, construction cost, travel time, and accessibility, MPMP can generate the corresponding optimal ratios, supporting analyses of decision impacts and benefiting decision makers. Instead of treating public transit as a single united system, this paper separates it into modal units, and Shanghai is used to test model validity and practicality.

  11. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
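
    The key ingredient that lets the number of components stay open is the stick-breaking construction, illustrated below (a generic sketch, not the authors' Gibbs sampler):

      import numpy as np

      def stick_breaking(alpha, truncation, rng):
          betas = rng.beta(1.0, alpha, size=truncation)
          remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
          return betas * remaining  # weights decay; the effective K adapts to the data

      rng = np.random.default_rng(4)
      w = stick_breaking(alpha=2.0, truncation=50, rng=rng)
      print("components above 1% weight:", int((w > 0.01).sum()))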

  12. A Model for Generating Multi-hazard Scenarios

    Science.gov (United States)

    Lo Jacomo, A.; Han, D.; Champneys, A.

    2017-12-01

    Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period, and multiple hazards tend to complicate the combined risk through their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
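
    In the same spirit, a toy Monte Carlo (assumed rates and a single assumed interaction, not the authors' parameters) samples yearly hazard occurrences and one cross-hazard effect:

      import numpy as np

      rng = np.random.default_rng(5)
      years, runs = 50, 10_000
      rates = {"earthquake": 1 / 100, "flood": 1 / 20, "landslide": 1 / 30}

      hit_any = 0
      for _ in range(runs):
          eq = rng.random(years) < rates["earthquake"]
          # Assumed interaction: past earthquakes triple the landslide rate
          ls_rate = np.where(eq.cumsum() > 0, 3 * rates["landslide"], rates["landslide"])
          ls = rng.random(years) < ls_rate
          fl = rng.random(years) < rates["flood"]
          hit_any += (eq | ls | fl).any()
      print(f"P(any hazard within {years} yr) ~ {hit_any / runs:.2f}")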

  13. A Novel Load Capacity Model with a Tunable Proportion of Load Redistribution against Cascading Failures

    Directory of Open Access Journals (Sweden)

    Zhen-Hao Zhang

    2018-01-01

    Full Text Available Defence against cascading failures is of great theoretical and practical significance. A novel load capacity model with a tunable proportion is proposed. We take degree and clustering coefficient into account to redistribute the loads of broken nodes. The redistribution is local: the loads of broken nodes are allocated to their nearest neighbours. Our model has been applied to artificial networks as well as two real networks. Simulation results show that networks become more vulnerable and more sensitive to intentional attacks as the average degree decreases. In addition, the critical threshold between the collapsed and intact states is affected by the tunable parameter. We can adjust the tunable parameter to obtain the optimal critical threshold and make the systems more robust against cascading failures.
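
    The redistribution rule can be sketched as below (the exact functional form combining degree and clustering coefficient is an assumption for illustration; networkx provides the network):

      import networkx as nx

      def cascade(G, seed, a=1.0, b=1.0, tolerance=0.5):
          load = {v: float(d) for v, d in G.degree()}      # initial load ~ degree
          cap = {v: (1 + tolerance) * load[v] for v in G}  # load-capacity model
          clust = nx.clustering(G)
          failed, queue = set(), [seed]
          while queue:
              v = queue.pop()
              if v in failed:
                  continue
              failed.add(v)
              nbrs = [u for u in G.neighbors(v) if u not in failed]
              weight = {u: G.degree(u) ** a * (1 + clust[u]) ** b for u in nbrs}
              total = sum(weight.values()) or 1.0
              for u in nbrs:
                  load[u] += load[v] * weight[u] / total   # local redistribution
                  if load[u] > cap[u]:
                      queue.append(u)
          return len(failed)

      G = nx.barabasi_albert_graph(500, 3, seed=0)
      print("cascade size:", cascade(G, seed=0))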

  14. Proportional reasoning

    DEFF Research Database (Denmark)

    Dole, Shelley; Hilton, Annette; Hilton, Geoff

    2015-01-01

    Proportional reasoning is widely acknowledged as a key to success in school mathematics, yet students’ continual difficulties with proportion-related tasks are well documented. This paper draws on a large research study that aimed to support 4th to 9th grade teachers to design and implement tasks...

  15. On the modeling of uplink inter-cell interference based on proportional fair scheduling

    KAUST Repository

    Tabassum, Hina

    2012-10-03

    We derive a semi-analytical expression for the uplink inter-cell interference (ICI) assuming proportional fair scheduling (with a maximum normalized signal-to-noise ratio (SNR) criterion) deployed in the cellular network. The derived expression can be customized for different models of channel statistics that can capture path loss, shadowing, and fading. Firstly, we derive an expression for the distribution of the locations of the allocated user in a given cell. Then, we derive the distribution and moment generating function of the uplink ICI from one interfering cell. Finally, we determine the moment generating function of the cumulative uplink ICI from all interfering cells. The derived expression is utilized to evaluate important network performance metrics such as outage probability and fairness among users. The accuracy of the derived expressions is verified by comparing the obtained results to Monte Carlo simulations. © 2012 IEEE.

  16. On the modeling of uplink inter-cell interference based on proportional fair scheduling

    KAUST Repository

    Tabassum, Hina; Yilmaz, Ferkan; Dawy, Zaher; Alouini, Mohamed-Slim

    2012-01-01

    We derive a semi-analytical expression for the uplink inter-cell interference (ICI) assuming proportional fair scheduling (with a maximum normalized signal-to-noise ratio (SNR) criterion) deployed in the cellular network. The derived expression can be customized for different models of channel statistics that can capture path loss, shadowing, and fading. Firstly, we derive an expression for the distribution of the locations of the allocated user in a given cell. Then, we derive the distribution and moment generating function of the uplink ICI from one interfering cell. Finally, we determine the moment generating function of the cumulative uplink ICI from all interfering cells. The derived expression is utilized to evaluate important network performance metrics such as outage probability and fairness among users. The accuracy of the derived expressions is verified by comparing the obtained results to Monte Carlo simulations. © 2012 IEEE.
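
    A Monte Carlo counterpart to the derivation (illustrative geometry and parameters only) schedules, in each interfering cell, the user with the maximum normalized SNR, which under this simple model reduces to the maximum fading gain, and accumulates that user's interference at the serving base station:

      import numpy as np

      rng = np.random.default_rng(6)
      R, alpha, n_users, n_cells, trials = 500.0, 3.7, 20, 6, 20_000

      ici = np.empty(trials)
      for i in range(trials):
          total = 0.0
          for c in range(n_cells):
              bs = 2 * R * np.exp(2j * np.pi * c / n_cells)       # interfering BS
              r = R * np.sqrt(0.01 + 0.99 * rng.random(n_users))  # users in its cell
              users = bs + r * np.exp(2j * np.pi * rng.random(n_users))
              fading = rng.exponential(size=n_users)              # Rayleigh power gains
              k = int(np.argmax(fading))  # max normalized SNR -> max fading gain here
              total += rng.exponential() * abs(users[k]) ** -alpha  # ICI at origin
          ici[i] = total
      print(f"mean ICI {ici.mean():.3e}; 95th percentile {np.quantile(ici, 0.95):.3e}")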

  17. Modelling of rational economic proportions of the balance sheet structure of the petrochemical enterprises

    Directory of Open Access Journals (Sweden)

    G. S. Tsvetkova

    2016-01-01

    Full Text Available The paper assesses the balance sheet structures of rival companies in the petrochemical complex of the Russian Federation. J. Aubert-Kriye's method is chosen as the main methodological tool, and it is demonstrated on the petrochemical enterprises PJSC “Sibur”, PJSC “Nizhnekamskneftekhim”, and JSC “Sterlitamak Petrochemical Plant”. The analysis of balance sheets showed that the enterprises have elements of irrational structure. “Sibur” is characterized by a low share of owner's equity and a high share of long-term liabilities. “Nizhnekamskneftekhim” is characterized by a high share of owner's equity, which is used for the development of the company and is more expensive in comparison with liabilities. “Sterlitamak Petrochemical Plant” has excessive liquidity ratios, which indicates an accumulation of cash and its diversion into receivables. At the same time, ongoing investment in equipment upgrades and capacity expansion makes it necessary to maintain a rational balance sheet structure in the enterprises of a petrochemical complex. Using “Nizhnekamskneftekhim” as an example, a rational balance sheet structure of the company is modeled. The sequence of calculations includes diagnostics of the structural distribution of current assets and sources of funds; determination of the structure of the financial and active elements of the entity; and establishment of permissible limits of change of the basic proportions and ratios by the criteria of solvency and financial stability. Modeling the structure of liabilities and current assets on the basis of J. Aubert-Kriye's method showed that the economic indicators of “Nizhnekamskneftekhim” can be improved. Further determination of the tolerance ranges for the elements of liabilities and current assets will make it possible to keep the economic proportions in balance…

  18. MCNP modelling of the wall effects observed in tissue-equivalent proportional counters.

    Science.gov (United States)

    Hoff, J L; Townsend, L W

    2002-01-01

    Tissue-equivalent proportional counters (TEPCs) utilise tissue-equivalent materials to represent homogeneous microscopic volumes of human tissue. Although both the walls and the gas simulate the same medium, they respond to radiation differently: density differences between the two materials cause distortions, or wall effects, in measurements, with the most dominant effect caused by delta rays. This study uses a Monte Carlo transport code, MCNP, to simulate the transport of secondary electrons within a TEPC. The Rudd model, a singly differential cross section with no dependence on electron direction, is used to describe the energy spectrum obtained from the impact of two iron beams on water. Based on the models used in this study, a wall-less TEPC had a higher lineal energy (keV/micron) as a function of impact parameter than a solid-wall TEPC for the iron beams under consideration. An important conclusion of this study is that MCNP has the ability to model the wall effects observed in TEPCs.

  19. Checking Fine and Gray subdistribution hazards model with cumulative sums of residuals

    DEFF Research Database (Denmark)

    Li, Jianing; Scheike, Thomas; Zhang, Mei Jie

    2015-01-01

    Recently, Fine and Gray (J Am Stat Assoc 94:496–509, 1999) proposed a semi-parametric proportional regression model for the subdistribution hazard function which has been used extensively for analyzing competing risks data. However, a failure of model adequacy could lead to severe bias in parameter estimation, and only a limited contribution has been made to checking the model assumptions. In this paper, we present a class of analytical methods and graphical approaches for checking the assumptions of Fine and Gray's model. The proposed goodness-of-fit test procedures are based on the cumulative sums of residuals.

  20. An optimization model for transportation of hazardous materials

    International Nuclear Information System (INIS)

    Seyed-Hosseini, M.; Kheirkhah, A. S.

    2005-01-01

    In this paper, the optimal routing problem for the transportation of hazardous materials is studied. Routing to reduce the risk of transporting hazardous materials has been studied and formulated by many researchers, and several routing models have been presented to date. These models can be classified into two categories: models for routing a single movement and models for routing multiple movements. In this paper, in accordance with the current rules and regulations for road transportation of hazardous materials in Iran, a routing problem is designed in which the routes for several independent movements are determined simultaneously. To examine the model, the transportation of two different dangerous materials over the road network of Mazandaran province in northern Iran is formulated and solved using an integer programming model.
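
    For a single movement, risk-minimizing routing reduces to a shortest path on risk-weighted edges, as in the toy sketch below (an illustrative network, not the paper's integer program; the multi-movement model adds constraints coupling the shipments):

      import networkx as nx

      G = nx.DiGraph()
      # Edge weight = incident risk x population exposure (assumed values)
      for u, v, risk, pop in [("depot", "a", 2.0, 8), ("depot", "b", 5.0, 2),
                              ("a", "plant", 3.0, 9), ("b", "plant", 4.0, 1)]:
          G.add_edge(u, v, risk=risk * pop)

      route = nx.shortest_path(G, "depot", "plant", weight="risk")
      total = nx.shortest_path_length(G, "depot", "plant", weight="risk")
      print(route, total)  # ['depot', 'b', 'plant'] with total risk 14.0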

  1. Semi-parametric proportional intensity models robustness for right-censored recurrent failure data

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, S.T. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States); Landers, T.L. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)]. E-mail: landers@ou.edu; Rhoads, T.R. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)

    2005-10-01

    This paper reports the robustness of the four proportional intensity (PI) models: Prentice-Williams-Peterson-gap time (PWP-GT), PWP-total time (PWP-TT), Andersen-Gill (AG), and Wei-Lin-Weissfeld (WLW), for right-censored recurrent failure event data. The results are beneficial to practitioners in anticipating the more favorable engineering application domains and selecting appropriate PI models. The PWP-GT and AG prove to be the models of choice over ranges of sample sizes, shape parameters, and censoring severity. At the smaller sample size (U=60), where there are 30 units per class of a two-level covariate, the PWP-GT performs well for moderate right-censoring (P_c <= 0.8, where 80% of the units have some censoring) and for moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range 0.8 <= delta <= 1.8). For the large sample size (U=180), the PWP-GT performs well under severe right-censoring (0.8 <= P_c <= 1.0), while the AG model proves to outperform the PWP-TT and WLW for stationary processes (HPP) across a wide range of right-censorship (0.0 <= P_c <= 1.0) and for sample sizes of 60 or more.
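
    For readers wanting to fit an Andersen-Gill-style proportional intensity model in Python, one option (an assumption; the paper does not specify software) is lifelines' CoxTimeVaryingFitter on counting-process (start, stop] data with hypothetical column names:

      import pandas as pd
      from lifelines import CoxTimeVaryingFitter

      # Each row is one at-risk interval for one unit; event marks a failure at stop
      recurrences = pd.DataFrame({
          "id":    [1, 1, 1, 2, 2],
          "start": [0, 5, 12, 0, 9],
          "stop":  [5, 12, 20, 9, 15],
          "event": [1, 1, 0, 1, 0],
          "grp":   [0, 0, 0, 1, 1],
      })

      ctv = CoxTimeVaryingFitter()
      ctv.fit(recurrences, id_col="id", event_col="event",
              start_col="start", stop_col="stop")
      ctv.print_summary()

    A PWP-GT fit would instead reset the clock at each event (gap times) and stratify on event number.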

  2. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM allows the user to evaluate the life cycle costs of various techniques used in hazardous waste minimization and to compare them with the life cycle costs of current operating practices. The program was developed in the C language on an IBM-compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis of the minimization of the Army's six most important hazardous waste streams: solvents, paint stripping wastes, metal plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs of minimization alternatives for any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented at more than 60 Army installations in the United States.
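
    The core life-cycle cost comparison that such a tool automates can be shown in a few lines (assumed cash flows and discount rate, purely illustrative):

      def npv(capital, annual_cost, years=15, rate=0.07):
          # Total discounted cost of an alternative over its life cycle
          return capital + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

      current = npv(capital=0, annual_cost=120_000)        # dispose as-is
      recovery = npv(capital=350_000, annual_cost=45_000)  # solvent recovery/reuse
      print(f"current ${current:,.0f} vs recovery ${recovery:,.0f} over 15 years")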

  3. Model of predicting proportion of diesel fuel and engine oil in diesel ...

    African Journals Online (AJOL)

    The viscosity of diesel-adulterated SAE 40 engine oil at varying proportions of the mixture is presented. Regression, variation-of-intercept, and power-parameter methods are used to develop polynomial and power-law functions for predicting the proportion of either diesel or engine oil in diesel-adulterated SAE 40 engine oil ...
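
    The power-law alternative can be illustrated as follows (synthetic viscosity readings, not the paper's measurements): fit viscosity = c * p**m on log-log axes and invert it to predict the proportion p from a measured viscosity.

      import numpy as np

      percent_diesel = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
      viscosity = np.array([95.0, 60.0, 42.0, 31.0, 24.0])  # assumed cSt readings

      m, log_c = np.polyfit(np.log(percent_diesel), np.log(viscosity), 1)
      c = np.exp(log_c)
      predict_percent = lambda visc: (visc / c) ** (1.0 / m)
      print(f"visc = {c:.1f} * p**{m:.2f}; 35 cSt -> {predict_percent(35.0):.1f}% diesel")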

  4. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  5. A conflict model for the international hazardous waste disposal dispute

    International Nuclear Information System (INIS)

    Hu Kaixian; Hipel, Keith W.; Fang, Liping

    2009-01-01

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  6. A conflict model for the international hazardous waste disposal dispute.

    Science.gov (United States)

    Hu, Kaixian; Hipel, Keith W; Fang, Liping

    2009-12-15

    A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.

  7. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  8. Proportional odds model applied to mapping of disease resistance genes in plants

    Directory of Open Access Journals (Sweden)

    Maria Helena Spyrides-Cunha

    2000-03-01

    Full Text Available Molecular markers have been used extensively to map quantitative trait loci (QTL) controlling disease resistance in plants. Mapping is usually done by establishing a statistical association between molecular marker genotypes and quantitative variations in disease resistance. However, most statistical approaches require a continuous distribution of the response variable, a requirement not always met, since evaluation of disease resistance is often done using visual ratings based on an ordinal scale of disease severity. This paper discusses the application of the proportional odds model to the mapping of disease resistance genes in plants amenable to expression as ordinal data. The model was used to map two resistance QTL of maize to Puccinia sorghi. The microsatellite markers bngl166 and bngl669, located on chromosomes 2 and 8, respectively, were used to genotype F2 individuals from a segregating population. Genotypes at each marker locus were then compared by assessing disease severity in F3 plants derived from the selfing of each genotyped F2 plant, based on an ordinal severity scale. The residual deviance and the chi-square score statistic indicated a good fit of the model to the data, and the odds were proportional at each threshold. Single-marker analyses detected significant differences among marker genotypes at both marker loci, indicating that these markers were linked to disease resistance QTL. The inclusion of the interaction term after single-marker analysis provided strong evidence of an epistatic interaction between the two QTL. These results indicate that the proportional odds model can be used as an alternative to traditional methods in cases where the response variable consists of an ordinal scale, thus eliminating the problems of heteroscedasticity, non-linearity, and non-normality of residuals often associated with this type of data.
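
    A proportional odds fit of this kind can be sketched with statsmodels' OrderedModel (an assumption; the authors do not name their software), with hypothetical column names for the two markers and the ordinal severity ratings:

      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      def fit_proportional_odds(df):
          X = df[["bngl166", "bngl669"]].assign(
              epistasis=df["bngl166"] * df["bngl669"])      # interaction term
          y = pd.Categorical(df["severity"], ordered=True)  # ordinal disease ratings
          return OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

      # res = fit_proportional_odds(f3_ratings); print(res.summary())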

  9. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    Science.gov (United States)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  10. Developments in consequence modelling of accidental releases of hazardous materials

    NARCIS (Netherlands)

    Boot, H.

    2012-01-01

    The modelling of consequences of releases of hazardous materials in the Netherlands has mainly been based on the “Yellow Book”. Although there is no updated version of this official publication, new insights have been developed during the last decades. This article will give an overview of new

  11. Modeling Employees' Perceptions and Proportional Preferences of Work Locations: The Regular Workplace and Telecommuting Alternatives

    OpenAIRE

    Mokhtarian, Patricia; Bagley, Michael

    2000-01-01

    This paper develops measures of job and workplace perceptions, and examines the importance of those and other measures to the desired proportions of work time at each of three locations: regular workplace, home, and telecommuting center. Using data from 188 participants in the Neighborhood Telecenters Project, four job context perception factors were identified: productivity, job satisfaction, supervisor relationship, and co-worker interaction. Four generic workplace perception factors were i...

  12. Modelling and analysis of piezoelectric cantilever energy harvester for different proof mass and material proportion

    Science.gov (United States)

    Shashank, R.; Harisha, S. K., Dr; Abhishek, M. C.

    2018-02-01

    Energy harvesting from ambient energy sources is one of the fastest growing trends in the world, and research and development in this area is progressing steadily toward extracting the maximum power output from existing resources. The ambient energy sources available in nature are solar, wind, thermal, and vibrational energy; among these, harvesting from vibrational sources has gained particular importance because it is not influenced by environmental parameters and is freely available at any time and anywhere. This project validates the voltage and electrical power output of an experimentally tested energy harvester, varies the parameters of the harvester, analyses the effect of those parameters on its performance, and compares the results. The cantilever beam was designed, analysed, and simulated using the COMSOL Multiphysics software. The energy harvester gives an electrical output voltage of 2.75 V at a natural frequency of 37.2 Hz and an electrical power of 29 μW. Decreasing the proportion of piezoelectric material while simultaneously increasing the proportion of polymer material (so that the total proportion remains the same) increases the electrical voltage and decreases the natural frequency of the beam approximately linearly, up to 3.9 V and 28.847 Hz when the beam proportion is 24% piezoelectric and 76% polymer; at a proportion of 26% and 74%, the natural frequency decreases further but the voltage drops abruptly to 2.8 V. The voltage generated by the harvester increases proportionally, reaching 3.7 V, until the proof mass reaches 4 grams; further increases in the weight of the proof mass decrease the generated voltage. The investigation thus shows that the weight of the proof mass and the length of the cantilever beam should be optimised to obtain maximum power output.

  13. The 2014 United States National Seismic Hazard Model

    Science.gov (United States)

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.

    2015-01-01

    New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.

  14. A New Seismic Hazard Model for Mainland China

    Science.gov (United States)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.

    2017-12-01

    We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
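
    As a sketch of the tapered Gutenberg-Richter (TGR) distribution mentioned above, the following assumes the usual Kagan-style formulation, a power law in seismic moment tapered exponentially above the corner magnitude; the a-value, b-value, and corner magnitude are placeholders, not the paper's estimates.

```python
import numpy as np

def moment(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

def tgr_cumulative_rate(mw, a=4.0, b=1.0, corner_mw=8.0, m_min=4.0):
    """Annual rate of events >= mw under a tapered Gutenberg-Richter law:
    a Gutenberg-Richter power law in moment, multiplied by an exponential
    taper that rolls the rate off above the corner magnitude."""
    beta = (2.0 / 3.0) * b                 # moment-space exponent
    rate_min = 10.0 ** (a - b * m_min)     # plain GR rate at m_min
    m, mt, mc = moment(mw), moment(m_min), moment(corner_mw)
    return rate_min * (mt / m) ** beta * np.exp((mt - m) / mc)

for mw in (5.0, 6.0, 7.0, 8.0, 8.5):
    print(f"Mw >= {mw}: {tgr_cumulative_rate(mw):.5f} events/yr")
```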

  15. Modeling of Marine Natural Hazards in the Lesser Antilles

    Science.gov (United States)

    Zahibo, Narcisse; Nikolkina, Irina; Pelinovsky, Efim

    2010-05-01

    The Caribbean Sea countries are often affected by various marine natural hazards: hurricanes and cyclones, tsunamis, and flooding. Historical data on marine natural hazards for the Lesser Antilles, and especially for Guadeloupe, are presented briefly. Numerical simulations of several historical tsunamis in the Caribbean Sea (the 1755 Lisbon trans-Atlantic tsunami, the 1867 Virgin Island earthquake tsunami, and the 2003 Montserrat volcano tsunami) are performed within the framework of nonlinear shallow-water theory. Numerical results demonstrate the importance of real bathymetry variability with respect to the direction of tsunami wave propagation and its characteristics. The prognostic tsunami wave height distribution along the Caribbean coast is computed using various forms of seismic and hydrodynamic sources. These results are used to estimate the far-field potential for tsunami hazards at coastal locations in the Caribbean Sea. The nonlinear shallow-water theory is also applied to model storm surges induced by tropical cyclones, in particular cyclones "Lilli" in 2002 and "Dean" in 2007. The obtained results are compared with observed data. The numerical models have been tested against known analytical solutions of the nonlinear shallow-water wave equations. The obtained results are described in detail in [1-7]. References [1] N. Zahibo and E. Pelinovsky, Natural Hazards and Earth System Sciences, 1, 221 (2001). [2] N. Zahibo, E. Pelinovsky, A. Yalciner, A. Kurkin, A. Koselkov and A. Zaitsev, Oceanologica Acta, 26, 609 (2003). [3] N. Zahibo, E. Pelinovsky, A. Kurkin and A. Kozelkov, Science of Tsunami Hazards, 21, 202 (2003). [4] E. Pelinovsky, N. Zahibo, P. Dunkley, M. Edmonds, R. Herd, T. Talipova, A. Kozelkov and I. Nikolkina, Science of Tsunami Hazards, 22, 44 (2004). [5] N. Zahibo, E. Pelinovsky, E. Okal, A. Yalciner, C. Kharif, T. Talipova and A. Kozelkov, Science of Tsunami Hazards, 23, 25 (2005). [6] N. Zahibo, E. Pelinovsky, T. Talipova, A. Rabinovich, A. Kurkin and I

  16. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability, using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints, as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
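
    A minimal sketch of the iterative proportion fitting idea on a small probability table, assuming plain row and column marginal constraints; the facies proportions below are invented.

```python
import numpy as np

def ipf(joint, row_target, col_target, tol=1e-10, max_iter=1000):
    """Iteratively rescale an initial joint probability table until its
    row and column marginals match the imposed target marginals."""
    p = joint.copy()
    for _ in range(max_iter):
        p *= (row_target / p.sum(axis=1))[:, None]   # fit row marginals
        p *= (col_target / p.sum(axis=0))[None, :]   # fit column marginals
        if np.allclose(p.sum(axis=1), row_target, atol=tol):
            break
    return p

initial = np.full((3, 3), 1.0 / 9.0)       # uninformative starting table
rows = np.array([0.5, 0.3, 0.2])           # marginals inferred from wells
cols = np.array([0.4, 0.4, 0.2])
fitted = ipf(initial, rows, cols)
print(fitted.round(4), "sum =", fitted.sum())
```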

  17. Religiousness and hazardous alcohol use: a conditional indirect effects model.

    Science.gov (United States)

    Jankowski, Peter J; Hardy, Sam A; Zamboanga, Byron L; Ham, Lindsay S

    2013-08-01

    The current study examined a conditional indirect effects model of the association between religiousness and adolescents' hazardous alcohol use. In doing so, we responded to the need to include both mediators and moderators, and the need for theoretically informed models when examining religiousness and adolescents' alcohol use. The sample consisted of 383 adolescents, aged 15-18, who completed an online questionnaire. Results of structural equation modeling supported the proposed model. Religiousness was indirectly associated with hazardous alcohol use through both positive alcohol expectancy outcomes and negative alcohol expectancy valuations. Significant moderating effects for alcohol expectancy valuations on the association between alcohol expectancies and alcohol use were also found. The effects for alcohol expectancy valuations confirm valuations as a distinct construct to that of alcohol expectancy outcomes, and offer support for the protective role of internalized religiousness on adolescents' hazardous alcohol use as a function of expectancy valuations. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. Rockfall hazard analysis using LiDAR and spatial modeling

    Science.gov (United States)

    Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho

    2010-05-01

    Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information of the source area of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls and the spatial attribution of their frequency and energy.

  19. Defaultable Game Options in a Hazard Process Model

    Directory of Open Access Journals (Sweden)

    Tomasz R. Bielecki

    2009-01-01

    Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection between arbitrage prices and a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.

  20. Worklife expectancy in a cohort of Danish employees aged 55-65 years - comparing a multi-state Cox proportional hazard approach with conventional multi-state life tables.

    Science.gov (United States)

    Pedersen, Jacob; Bjorner, Jakob Bue

    2017-11-15

    Work life expectancy (WLE) expresses the expected time a person will remain in the labor market until he or she retires. This paper compares a life table approach to estimating WLE to an approach based on multi-state proportional hazards models. The two methods are used to estimate WLE in Danish members and non-members of an early retirement pensioning (ERP) scheme according to levels of health. In 2008, data on self-rated health (SRH) was collected from 5212 employees 55-65 years of age. Data on previous and subsequent long-term sickness absence, unemployment, returning to work, and disability pension was collected from national registers. WLE was estimated from multi-state life tables and through multi-state models. Results from the multi-state model approach agreed with the life table approach but provided narrower confidence intervals for small groups. The shortest WLE was seen for employees with poor SRH and ERP membership while the longest WLE was seen for those with good SRH and no ERP membership. Employees aged 55-56 years with poor SRH but no ERP membership had shorter WLE than employees with good SRH and ERP membership. Relative WLE reversed for the two groups after age 57. At age 55, employees with poor SRH could be expected to spend approximately 12 months on long-term sick leave and 9-10 months unemployed before they retired - regardless of ERP membership. ERP members with poor SRH could be expected to spend 4.6 years working, while non-members could be expected to spend 7.1 years working. WLE estimated through multi-state models provided an effective way to summarize complex data on labor market affiliation. WLE differed noticeably between members and non-members of the ERP scheme. It has been hypothesized that while ERP membership would prompt some employees to retire earlier than they would have done otherwise, this effect would be partly offset by reduced time spent on long-term sick leave or unemployment. Our data showed no indication of

  1. Worklife expectancy in a cohort of Danish employees aged 55–65 years - comparing a multi-state Cox proportional hazard approach with conventional multi-state life tables

    Directory of Open Access Journals (Sweden)

    Jacob Pedersen

    2017-11-01

    Full Text Available Abstract Background Work life expectancy (WLE) expresses the expected time a person will remain in the labor market until he or she retires. This paper compares a life table approach to estimating WLE to an approach based on multi-state proportional hazards models. The two methods are used to estimate WLE in Danish members and non-members of an early retirement pensioning (ERP) scheme according to levels of health. Methods In 2008, data on self-rated health (SRH) was collected from 5212 employees 55–65 years of age. Data on previous and subsequent long-term sickness absence, unemployment, returning to work, and disability pension was collected from national registers. WLE was estimated from multi-state life tables and through multi-state models. Results Results from the multi-state model approach agreed with the life table approach but provided narrower confidence intervals for small groups. The shortest WLE was seen for employees with poor SRH and ERP membership while the longest WLE was seen for those with good SRH and no ERP membership. Employees aged 55–56 years with poor SRH but no ERP membership had shorter WLE than employees with good SRH and ERP membership. Relative WLE reversed for the two groups after age 57. At age 55, employees with poor SRH could be expected to spend approximately 12 months on long-term sick leave and 9–10 months unemployed before they retired – regardless of ERP membership. ERP members with poor SRH could be expected to spend 4.6 years working, while non-members could be expected to spend 7.1 years working. Conclusion WLE estimated through multi-state models provided an effective way to summarize complex data on labor market affiliation. WLE differed noticeably between members and non-members of the ERP scheme. It has been hypothesized that while ERP membership would prompt some employees to retire earlier than they would have done otherwise, this effect would be partly offset by reduced time spent on

  2. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    Science.gov (United States)

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
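
    The sketch below illustrates the general modeling pattern, training logistic regression and SVM classifiers and scoring them by cross-validation with scikit-learn; the feature matrix is a synthetic stand-in for the paper's assay, physicochemical, and read-across variables, and none of the paper's data or tuned models are reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# synthetic stand-ins for one variable group: DPRA, h-CLAT, KeratinoSens,
# log P, and a read-across score (96 substances, binary hazard label)
X = rng.normal(size=(96, 5))
y = (X @ np.array([1.2, 1.0, 0.8, 0.3, 1.5]) + rng.normal(size=96) > 0).astype(int)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf"))]:
    pipeline = make_pipeline(StandardScaler(), clf)
    accuracy = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean cross-validated accuracy = {accuracy:.2f}")
```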

  3. A Bayesian Combined Model for Time-Dependent Turning Movement Proportions Estimation at Intersections

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2014-01-01

    Full Text Available Time-dependent turning movement flows are very important input data for intelligent transportation systems but cannot be detected directly by current traffic surveillance systems. Existing estimation models have proved insufficiently accurate and reliable across all intervals. An improved way to address this problem is to develop a combined model framework that integrates multiple submodels running simultaneously. This paper first presents a back-propagation neural network model to estimate dynamic turning movements, solved with a self-adaptive learning rate and gradient descent with momentum. Second, it develops an efficient Kalman filtering model and designs a revised sequential Kalman filtering algorithm. Based on the Bayesian method, using both historical data and currently estimated results for error calibration, the paper then integrates the two submodels into a Bayesian combined model framework and proposes a corresponding algorithm. A field survey was conducted at an intersection in Beijing to collect both time series of link counts and actual time-dependent turning movement flows, including historical and present data. The reported estimation results show that the Bayesian combined model is much more accurate and stable than the other models.
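
    A minimal sketch of the Kalman filtering submodel, assuming a linear measurement model in which downstream link counts equal the entering flow times the turning proportions; all matrices and counts are illustrative only.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x_pred = F @ x                        # state prediction
    P_pred = F @ P @ F.T + Q
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# state: turning proportions [left, through, right]; measurement:
# downstream link counts for an assumed entering flow of 100 vehicles
F = np.eye(3)                             # proportions drift slowly
H = 100.0 * np.eye(3)                     # counts = flow * proportions
Q, R = 1e-4 * np.eye(3), 25.0 * np.eye(3)
x, P = np.full(3, 1.0 / 3.0), np.eye(3)   # uninformative starting state

for z in (np.array([22.0, 61.0, 17.0]), np.array([25.0, 58.0, 18.0])):
    x, P = kalman_step(x, P, z, F, H, Q, R)
    print("estimated turning proportions:", (x / x.sum()).round(3))
```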

  4. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c−1), where a, b, c are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a, b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes and regression estimators of (a, b). The criterion for comparison is the Bayes risk associated with the respective estimator. The influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is also investigated. Estimates for the parameters (a, b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case, letting c = 2.
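
    A small sketch of the hazard and survival functions implied by h(t) = a + bt^(c−1), using the closed-form cumulative hazard H(t) = at + bt^c/c obtained by integration; the parameter values are arbitrary, and c = 2 recovers the linearly increasing special case.

```python
import numpy as np

def hazard(t, a, b, c):
    """General hazard rate h(t) = a + b * t**(c - 1), with a, b, c > 0."""
    return a + b * t ** (c - 1.0)

def survival(t, a, b, c):
    """S(t) = exp(-H(t)), using the closed-form cumulative hazard
    H(t) = a*t + b*t**c / c obtained by integrating h."""
    return np.exp(-(a * t + b * t ** c / c))

a, b, c = 0.05, 0.02, 2.0        # c = 2: linearly increasing hazard a + b*t
t = np.linspace(0.0, 10.0, 6)
print("h(t):", hazard(t, a, b, c).round(3))
print("S(t):", survival(t, a, b, c).round(3))
```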

  5. COMPARISON of FUZZY-BASED MODELS in LANDSLIDE HAZARD MAPPING

    Directory of Open Access Journals (Sweden)

    N. Mijani

    2017-09-01

    Full Text Available Landslides are among the main geomorphic processes that affect development prospects in mountainous areas and cause disastrous accidents. A landslide event depends on several uncertain criteria, such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river, and distance from the road network. This research aims to compare and evaluate different fuzzy-based models, including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma, and Fuzzy-OR. The main contribution of this paper is a comprehensive treatment of the criteria causing landslide hazard, considering their uncertainties, together with a comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, a city in Iran that has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment demonstrated that the Fuzzy-AHP model has higher accuracy than the other two models in landslide hazard zonation. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma, and Fuzzy-OR cover 13, 26, and 35 percent of the study area, respectively, at a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for landslide zonation in Sari, with the Fuzzy Gamma method a close second.

  6. Non-proportional deformation paths for sheet metal: experiments and models

    OpenAIRE

    van den Boogaard, Antonius H.; van Riel, M.; Hora, P.

    2009-01-01

    For mild steel, after significant plastic deformation in one direction, a subsequent deformation in an orthogonal direction shows a typical stress overshoot compared to monotonic deformation. This phenomenon is investigated experimentally and numerically on a DC06 material. Two models that incorporate the observed overshoot are compared. In the Teodosiu-Hu model, pre-strain influences the rate of kinematic hardening by a rather complex set of evolution equations. The shape of the elastic doma...

  7. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional, such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study … model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary …
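
    For orientation, here is a sketch of fitting the Aalen additive hazards model with the lifelines library on synthetic low-dimensional data; in the high-dimensional setting the paper addresses, one would first reduce dimension or add a ridge-type penalty (e.g., lifelines' coef_penalizer) rather than fit raw gene-expression covariates directly.

```python
import numpy as np
import pandas as pd
from lifelines import AalenAdditiveFitter

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
# synthetic survival times with an additive effect of x1, ~20% censoring
df["T"] = rng.exponential(1.0 / (0.5 + 0.2 * np.abs(df["x1"])))
df["E"] = rng.uniform(size=n) < 0.8

aaf = AalenAdditiveFitter(fit_intercept=True)   # add coef_penalizer for ridge
aaf.fit(df, duration_col="T", event_col="E")
print(aaf.cumulative_hazards_.tail())           # time-varying coefficients
```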

  8. Mathematical Practice in Textbooks Analysis: Praxeological Reference Models, the Case of Proportion

    Science.gov (United States)

    Wijayanti, Dyana; Winsløw, Carl

    2017-01-01

    We present a new method in textbook analysis, based on so-called praxeological reference models focused on specific content at task level. This method implies that the mathematical contents of a textbook (or textbook part) is analyzed in terms of the tasks and techniques which are exposed to or demanded from readers; this can then be interpreted…

  9. Non-proportional deformation paths for sheet metal: experiments and models

    NARCIS (Netherlands)

    van den Boogaard, Antonius H.; van Riel, M.; Hora, P.

    2009-01-01

    For mild steel, after significant plastic deformation in one direction, a subsequent deformation in an orthogonal direction shows a typical stress overshoot compared to monotonic deformation. This phenomenon is investigated experimentally and numerically on a DC06 material. Two models that

  10. Report 6: Guidance document. Man-made hazards and Accidental Aircraft Crash hazards modelling and implementation in extended PSA

    International Nuclear Information System (INIS)

    Kahia, S.; Brinkman, H.; Bareith, A.; Siklossy, T.; Vinot, T.; Mateescu, T.; Espargilliere, J.; Burgazzi, L.; Ivanov, I.; Bogdanov, D.; Groudev, P.; Ostapchuk, S.; Zhabin, O.; Stojka, T.; Alzbutas, R.; Kumar, M.; Nitoi, M.; Farcasiu, M.; Borysiewicz, M.; Kowal, K.; Potempski, S.

    2016-01-01

    The goal of this report is to provide guidance on practices to model man-made hazards (mainly external fires and explosions) and accidental aircraft crash hazards and to implement them in extended Level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the first ASAMPSA-E End Users Workshop (May 2014, Uppsala, Sweden); the objective of WP22 is to provide solutions for the different parts of a man-made hazards Level 1 PSA. This guidance focuses on man-made hazards, namely external fires and explosions, and accidental aircraft crash hazards, and refers to existing guidance whenever possible. The initial (WP21) part reflects current practices for assessing the frequencies of each type of hazard or combination of hazards (including correlated hazards) as initiating events for PSAs. The sources and quality of hazard data, the elements of hazard assessment methodologies, and relevant examples are discussed. Classification and criteria to properly assess hazard combinations, as well as examples and methods for assessing these combinations, are included in this guidance. Appendices present additional material with examples of practical approaches to aircraft crash and man-made hazards. The following issues are addressed: 1) Hazard assessment methodologies, including issues related to hazard combinations. 2) Modelling of safety-related SSC equipment. 3) HRA. 4) Emergency response. 5) Multi-unit issues. Recommendations, limitations, gaps identified in the existing methodologies, and a list of open issues are included. At all stages of this guidance, and especially from an industrial end-user perspective, one must keep in mind that the development of a man-made hazards probabilistic analysis must be conditioned on the ability to ultimately obtain a representative risk

  11. Numerical modeling of working of a multicellular proportional counter aimed to individual dosimetry of neutrons

    International Nuclear Information System (INIS)

    Bordy, J.M.; Barthe, J.; Boutruche, B.

    1993-01-01

    The use of a personal dosimeter imposes severe constraints, particularly on the polarization voltage and the tolerable dimensions. That is why numerical modelling of the detector's operation is an appreciable aid to its design. It allows the influence of modifying different parameters (nature and pressure of the gas, electrode dimensions, channel dimensions, polarization voltage, ...) to be determined quickly without having to build new prototypes. The aim of this report is to give some numerical results obtained with a multicellular counter of cylindrical geometry. 6 figs

  12. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.

  13. Education and risk of coronary heart disease: Assessment of mediation by behavioural risk factors using the additive hazards model

    DEFF Research Database (Denmark)

    Nordahl, H; Rod, NH; Frederiksen, BL

    2013-01-01

    … seven Danish cohort studies were linked to registry data on education and incidence of CHD. Mediation by smoking, low physical activity, and body mass index (BMI) on the association between education and CHD was estimated by applying newly proposed methods for mediation based on the additive hazards … (95 % CI: 12, 22) for women and 37 (95 % CI: 28, 46) for men could be ascribed to the pathway through smoking. Further, 39 (95 % CI: 30, 49) cases for women and 94 (95 % CI: 79, 110) cases for men could be ascribed to the pathway through BMI. The effects of low physical activity were negligible. Using … contemporary methods, the additive hazards model, for mediation we indicated the absolute numbers of CHD cases prevented when modifying smoking and BMI. This study confirms previous claims based on the Cox proportional hazards model that behavioral risk factors partially mediate the effect of education on CHD …

  14. Global Drought Proportional Economic Loss Risk Deciles

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Drought Proportional Economic Loss Risk Deciles is a 2.5 minute grid of drought hazard economic loss as proportions of Gross Domestic Product (GDP) per...

  15. A decision model for the risk management of hazardous processes

    International Nuclear Information System (INIS)

    Holmberg, J.E.

    1997-03-01

    A decision model for risk management of hazardous processes as an optimisation problem of a point process is formulated in the study. In the approach, the decisions made by the management are divided into three categories: (1) planned process lifetime, (2) selection of the design and, (3) operational decisions. These three controlling methods play quite different roles in the practical risk management, which is also reflected in our approach. The optimisation of the process lifetime is related to the licensing problem of the process. It provides a boundary condition for a feasible utility function that is used as the actual objective function, i.e., maximizing the process lifetime utility. By design modifications, the management can affect the inherent accident hazard rate of the process. This is usually a discrete optimisation task. The study particularly concentrates upon the optimisation of the operational strategies given a certain design and licensing time. This is done by a dynamic risk model (marked point process model) representing the stochastic process of events observable or unobservable to the decision maker. An optimal long term control variable guiding the selection of operational alternatives in short term problems is studied. The optimisation problem is solved by the stochastic quasi-gradient procedure. The approach is illustrated by a case study. (23 refs.)

  16. Modeling emergency evacuation for major hazard industrial sites

    International Nuclear Information System (INIS)

    Georgiadou, Paraskevi S.; Papazoglou, Ioannis A.; Kiranoudis, Chris T.; Markatos, Nikolaos C.

    2007-01-01

    A model providing the temporal and spatial distribution of the population under evacuation around a major hazard facility is developed. A discrete state stochastic Markov process simulates the movement of the evacuees. The area around the hazardous facility is divided into nodes connected among themselves with links representing the road system of the area. Transition from node-to-node is simulated as a random process where the probability of transition depends on the dynamically changed states of the destination and origin nodes and on the link between them. Solution of the Markov process provides the expected distribution of the evacuees in the nodes of the area as a function of time. A Monte Carlo solution of the model provides in addition a sample of actual trajectories of the evacuees. This information coupled with an accident analysis which provides the spatial and temporal distribution of the extreme phenomenon following an accident, determines a sample of the actual doses received by the evacuees. Both the average dose and the actual distribution of doses are then used as measures in evaluating alternative emergency response strategies. It is shown that in some cases the estimation of the health consequences by the average dose might be either too conservative or too non-conservative relative to the one corresponding to the distribution of the received dose and hence not a suitable measure to evaluate alternative evacuation strategies
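
    A minimal sketch of the discrete-state Markov idea: the expected distribution of evacuees over nodes evolves by repeated multiplication with a one-step transition matrix, with the shelter as an absorbing state. The network and probabilities are hypothetical, and unlike this sketch the paper's model lets transition probabilities change dynamically with the states of the origin and destination nodes.

```python
import numpy as np

# Hypothetical four-node area: nodes 0-2 are junctions, node 3 is the
# shelter (absorbing state). Rows of P are one-step transition probabilities.
P = np.array([
    [0.5, 0.4, 0.0, 0.1],
    [0.0, 0.4, 0.4, 0.2],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])
p = np.array([0.6, 0.3, 0.1, 0.0])   # initial distribution of evacuees

for t in range(1, 11):
    p = p @ P                         # expected node occupancy at time t
    print(f"t={t:2d}: fraction sheltered = {p[3]:.3f}")
```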

  17. Household hazardous waste disposal to landfill: using LandSim to model leachate migration.

    Science.gov (United States)

    Slack, Rebecca J; Gronow, Jan R; Hall, David H; Voulvoulis, Nikolaos

    2007-03-01

    Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW.

  18. Opinion: The use of natural hazard modeling for decision making under uncertainty

    Science.gov (United States)

    David E. Calkin; Mike Mentis

    2015-01-01

    Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex...

  19. Use of a Bayesian isotope mixing model to estimate proportional contributions of multiple nitrate sources in surface water

    International Nuclear Information System (INIS)

    Xue Dongmei; De Baets, Bernard; Van Cleemput, Oswald; Hennessy, Carmel; Berglund, Michael; Boeckx, Pascal

    2012-01-01

    To identify different NO₃⁻ sources in surface water and to estimate their proportional contributions to the nitrate mixture, a dual isotope approach and a Bayesian isotope mixing model have been applied to six different surface waters affected by agriculture, greenhouses in an agricultural area, and households. Annual mean δ¹⁵N–NO₃⁻ values were between 8.0 and 19.4‰, while annual mean δ¹⁸O–NO₃⁻ values ranged from 4.5 to 30.7‰. SIAR was used to estimate the proportional contributions of five potential NO₃⁻ sources (NO₃⁻ in precipitation, NO₃⁻ fertilizer, NH₄⁺ in fertilizer and rain, soil N, and manure and sewage). SIAR showed that "manure and sewage" contributed the most, "soil N", "NO₃⁻ fertilizer" and "NH₄⁺ in fertilizer and rain" contributed intermediate amounts, and "NO₃⁻ in precipitation" contributed the least. The SIAR output can be considered a "fingerprint" of the NO₃⁻ source contributions. However, the wide range of isotope values observed in the surface waters and in the NO₃⁻ sources limits its applicability.
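
    As a simplified, deterministic stand-in for the Bayesian treatment in SIAR, the sketch below solves a small mixing system by non-negative least squares with a sum-to-one constraint; the source signatures are invented, and a genuine SIAR analysis would add priors, fractionation, and error terms to handle the non-uniqueness of a five-source, two-isotope system.

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical source signatures, rows = (d15N, d18O) in permil, for:
# precipitation NO3-, NO3- fertilizer, NH4+ inputs, soil N, manure/sewage
sources = np.array([
    [2.0, 60.0], [3.0, 22.0], [0.0, 5.0], [5.0, 4.0], [15.0, 8.0],
]).T                                    # shape: (2 isotopes, 5 sources)
mixture = np.array([10.0, 12.0])        # observed surface-water signature

# append a heavily weighted sum-to-one row so proportions are a composition
A = np.vstack([sources, 100.0 * np.ones(5)])
b = np.append(mixture, 100.0)
proportions, _ = nnls(A, b)
print("estimated source proportions:", proportions.round(3))
```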

  20. A model for the operation of helium-filled proportional counter at low temperatures near 4.2 K

    International Nuclear Information System (INIS)

    Masaoka, Sei; Katano, Rintaro; Kishimoto, Shunji; Isozumi, Yasuhito

    2000-01-01

    In order to understand the operation of the helium-filled proportional counter (HFPC) from the standpoint of fundamental atomic and molecular processes, we have surveyed previous work on collision processes in discharged helium gas. By analyzing the gas gain curve, after-pulses, and discharge current experimentally observed at 4.2 K, the electron avalanche and the secondary electron emission from the cathode have been related to the collision processes in helium. A simplified model for HFPC operation at low temperatures near 4.2 K has been constructed from these processes.

  1. VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation

    Science.gov (United States)

    Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.

    2009-12-01

    Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the

  2. A structure for models of hazardous materials with complex behavior

    International Nuclear Information System (INIS)

    Rodean, H.C.

    1991-01-01

    Most atmospheric dispersion models used to assess the environmental consequences of accidental releases of hazardous chemicals do not have the capability to simulate the pertinent chemical and physical processes associated with the release of the material and its mixing with the atmosphere. The purpose of this paper is to present a materials sub-model with the flexibility to simulate the chemical and physical behaviour of a variety of materials released into the atmosphere. The model, which is based on thermodynamic equilibrium, incorporates the ideal gas law, temperature-dependent vapor pressure equations, temperature-dependent dissociation reactions, and reactions with atmospheric water vapor. The model equations, written in terms of pressure ratios and dimensionless parameters, are used to construct equilibrium diagrams with temperature and the mass fraction of the material in the mixture as coordinates. The model's versatility is demonstrated by its application to the release of UF 6 and N 2 O 4 , two materials with very different physical and chemical properties. (author)
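
    To make two of the listed ingredients concrete, here is a hedged sketch combining a temperature-dependent vapor pressure relation (the Antoine form, with published coefficients for water) with the ideal gas law; the actual sub-model couples such relations with dissociation reactions and reactions with atmospheric water vapor.

```python
# Antoine coefficients below are published values for water between
# roughly 1 and 100 deg C (pressure in mmHg, temperature in deg C).
def antoine_pressure_mmhg(t_c, a=8.07131, b=1730.63, c=233.426):
    """Saturation vapor pressure from the Antoine relation."""
    return 10.0 ** (a - b / (c + t_c))

def saturated_vapor_moles(t_c, volume_m3):
    """Ideal gas law: moles of vapor filling a volume at saturation."""
    p_pa = antoine_pressure_mmhg(t_c) * 133.322   # mmHg -> Pa
    R = 8.314                                     # J/(mol K)
    return p_pa * volume_m3 / (R * (t_c + 273.15))

print(f"{saturated_vapor_moles(25.0, 1.0):.2f} mol water vapor per m^3 at 25 C")
```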

  3. Preliminary deformation model for National Seismic Hazard map of Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z. [Geodesy Research Division, Faculty of Earth Science and Technology, Institute of Technology Bandung (Indonesia); Susilo; Efendi, Joni [Agency for Geospatial Information (BIG) (Indonesia)

    2015-04-24

    A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed from block rotation and strain accumulation functions in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creeping in a shallower part. This research shows that rigid body motion dominates the deformation pattern, with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults, where significant deformation reaches 25 mm/year.

  4. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    NARCIS (Netherlands)

    Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.

    2017-01-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood

  5. Household hazardous waste disposal to landfill: Using LandSim to model leachate migration

    International Nuclear Information System (INIS)

    Slack, Rebecca J.; Gronow, Jan R.; Hall, David H.; Voulvoulis, Nikolaos

    2007-01-01

    Municipal solid waste (MSW) landfill leachate contains a number of aquatic pollutants. A specific MSW stream often referred to as household hazardous waste (HHW) can be considered to contribute a large proportion of these pollutants. This paper describes the use of the LandSim (Landfill Performance Simulation) modelling program to assess the environmental consequences of leachate release from a generic MSW landfill in receipt of co-disposed HHW. Heavy metals and organic pollutants were found to migrate into the zones beneath a model landfill site over a 20,000-year period. Arsenic and chromium were found to exceed European Union and US-EPA drinking water standards at the unsaturated zone/aquifer interface, with levels of mercury and cadmium exceeding minimum reporting values (MRVs). The findings demonstrate the pollution potential arising from HHW disposal with MSW. - Aquatic pollutants linked to the disposal of household hazardous waste in municipal landfills have the potential to exist in soil and groundwater for many years

  6. Numerical Modelling of Extreme Natural Hazards in the Russian Seas

    Science.gov (United States)

    Arkhipkin, Victor; Dobrolyubov, Sergey; Korablina, Anastasia; Myslenkov, Stanislav; Surkova, Galina

    2017-04-01

    Storm surges and extreme waves are severe marine natural hazards. Due to the almost complete lack of field observations of these phenomena in the Russian seas (Caspian, Black, Azov, Baltic, White, Barents, Okhotsk, Kara), especially of their formation, development, and decay, they have been studied using numerical simulation. To calculate wind-wave parameters for the seas listed above, except the Barents Sea, the spectral model SWAN was applied; for the Barents and Kara seas we used the WAVEWATCH III model. Formation and development of storm surges were studied using the ADCIRC model. The input data for the models were bottom topography, wind, atmospheric pressure, and ice cover. In modeling surges in the White and Barents seas, tidal level fluctuations were used; they were calculated from 16 harmonic constants obtained from the global tide atlas FES2004. Wind, atmospheric pressure, and ice cover were taken from the NCEP/NCAR reanalysis for the period 1948-2010 and the NCEP/CFSR reanalysis for the period 1979-2015. In the modeling we used both regular and unstructured grids. The wave climates of the Caspian, Black, Azov, Baltic, and White seas were obtained, and the extreme wave height possible once in 100 years was calculated. The statistics of storm surges for the White, Barents, and Azov seas were evaluated, and the contributions of wind and atmospheric pressure to the formation of surges were estimated. A technique for the climatic forecasting of the frequency of storm synoptic situations was developed and applied to each sea. The research was carried out with the financial support of the RFBR (grant 16-08-00829).

  7. A modeling framework for investment planning in interdependent infrastructures in multi-hazard environments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael

    2013-09-01

    Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.

  8. A Study of the Impact of Underground Economy on Integral Tax Burden in the Proportional Growth Model under Uncertainty

    Directory of Open Access Journals (Sweden)

    Akif Musayev

    2018-01-01

    Full Text Available Economic processes are naturally characterized by imprecise and uncertain information, one of the main reasons being the existence of an underground economy. However, existing works do not take the real-world imprecision and uncertainty of economic conditions into account. In this paper we consider the problem of calculating a taxation base to assess the tax burden of a proportionally growing economy under uncertainty. To account for the imprecision and uncertainty of economic processes, we use the theory of fuzzy sets. A fuzzy integral equation is used to identify the integral tax burden, taking into account the contribution of the underground economy for a given financial (tax) year. It is also assumed that the dynamics of gross domestic product are modeled by a fuzzy linear differential equation. An optimal value of the tax burden is determined as a solution of the considered fuzzy integral equation. An example is provided to illustrate the validity of the proposed study.

  9. Improvement of a Mixture Experiment Model Relating the Component Proportions to the Size of Nanonized Itraconazole Particles in Extemporary Suspensions

    Energy Technology Data Exchange (ETDEWEB)

    Pattarino, Franco; Piepel, Gregory F.; Rinaldi, Maurizio

    2018-05-01

    The Foglio Bonda et al. (2016) (henceforth FB) paper discussed the use of mixture experiment design and modeling methods to study how the proportions of three components in an extemporaneous oral suspension affected the mean diameter of drug particles (the response variable of interest). The three components were itraconazole (ITZ), Tween 20 (TW20), and Methocel® E5 (E5). After publication of the FB paper, the second author of this corrigendum (not an author of the original paper) contacted the corresponding author to point out some errors as well as insufficient explanations in parts of the paper. This corrigendum was prepared to address these issues. The authors of the original paper apologize for any inconvenience to readers.

  10. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study presents how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a downwind receptor can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
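
    A toy sketch of the combinatory idea: leak path factors multiply along compartments in series and are flow-weighted across parallel pathways. The pathway shares and LPF values below are invented for illustration and are not the study's results.

```python
def serial(*lpfs):
    """Fraction surviving a chain of compartments (product rule)."""
    total = 1.0
    for f in lpfs:
        total *= f
    return total

def parallel(pathways):
    """Flow-weighted sum over parallel pathways of (share, lpf) pairs."""
    return sum(share * lpf for share, lpf in pathways)

assumed = serial(0.5, 0.5)   # the standard 0.5 x 0.5 assumption
# hypothetical building: 70% of the release exits via a filtered HVAC
# path (LPF 0.01) and 30% via an open doorway (LPF 0.8)
modeled = parallel([(0.7, serial(0.5, 0.01)), (0.3, serial(0.5, 0.8))])
print(f"assumed total LPF = {assumed}, modeled total LPF = {modeled:.3f}")
```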

  11. Comparison of Cox and Gray's survival models in severe sepsis

    DEFF Research Database (Denmark)

    Kasal, Jan; Andersen, Zorana Jovanovic; Clermont, Gilles

    2004-01-01

    Although survival is traditionally modeled using Cox proportional hazards modeling, this approach may be inappropriate in sepsis, in which the proportional hazards assumption does not hold. Newer, more flexible models, such as Gray's model, may be more appropriate …
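
    As context, here is a sketch of the conventional workflow with the lifelines library: fit a Cox proportional hazards model on synthetic data and run the built-in Schoenfeld-residual check of the proportional hazards assumption, whose failure would motivate a more flexible alternative such as Gray's model. The data and coefficients are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({"age": rng.normal(60.0, 10.0, n),
                   "severity": rng.integers(0, 2, n).astype(float)})
# synthetic times whose hazard rises with age and severity; ~30% censored
df["T"] = rng.exponential(10.0 / np.exp(0.03 * (df["age"] - 60.0)
                                        + 0.5 * df["severity"]))
df["E"] = rng.uniform(size=n) < 0.7

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()
# Schoenfeld-residual-based test of the proportional hazards assumption;
# a clear violation would motivate a more flexible model such as Gray's
cph.check_assumptions(df, p_value_threshold=0.05)
```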

  12. Multi-level modelling of the response of the ultraminiature proportional counter: gas gain phenomena and pulse height spectra

    International Nuclear Information System (INIS)

    Olko, P.; Moutarde, C.; Segur, P.

    1995-01-01

    The ultraminiature proportional counters, UMC, unique radiation detectors for monitoring high intensity therapy fields, designed by Kliauga and operated at Columbia University (USA), have yielded a number of pulse height distributions for photons, neutrons and ions at simulated diameters of 5-50 nm. Monte Carlo calculations of the gas gain in such a counter questioned the possibility of achieving proportionality at such low simulated diameters. The response of the UMC has now been modelled taking into account both fluctuations of energy deposited in the counter volume and its calculated gas gain. Energy deposition was calculated using the MOCA-14, MOCA-8 and TRION codes, whereby distributions of ionisations d(j) after irradiations with ¹³⁷Cs, 15 MeV neutrons and 7 MeV amu⁻¹ deuterons were obtained. Monte Carlo calculations of electron avalanches in the UMC show that the size of the single-electron avalanche P(n) reaching the anode depends strongly on the location of the primary ionisation within the counter volume. Distributions of the size of electron avalanches for higher numbers of primary ionisations, P*j(n), were obtained by successive convolutions of P(n). Finally, the counter response was obtained by weighting P*j(n) over the d(j) distributions. On comparing the measured and calculated spectra it was concluded that the previously proposed single-electron peak calibration method might not be valid for the UMC due to the excessive width and overlap of the electron avalanche distributions. Better agreement between the measured and calculated spectra is found if broader electron avalanche distributions than those used in the present calculations are assumed. (author)

  13. A mental models approach to exploring perceptions of hazardous processes

    International Nuclear Information System (INIS)

    Bostrom, A.H.H.

    1990-01-01

    Based on mental models theory, a decision-analytic methodology is developed to elicit and represent perceptions of hazardous processes. An application to indoor radon illustrates the methodology. Open-ended interviews were used to elicit non-experts' perceptions of indoor radon, with explicit prompts for knowledge about health effects, exposure processes, and mitigation. Subjects then sorted photographs into radon-related and unrelated piles, explaining their rationale aloud as they sorted. Subjects demonstrated a small body of correct but often unspecific knowledge about exposure and effects processes. Most did not mention radon-decay processes, and seemed to rely on general knowledge about gases, radioactivity, or pollution to make inferences about radon. Some held misconceptions about contamination and health effects resulting from exposure to radon. In two experiments, subjects reading brochures designed according to the author's guidelines outperformed subjects reading a brochure distributed by the EPA on a diagnostic test, and did at least as well on an independently designed quiz. In both experiments, subjects who read any one of the brochures had more complete and correct knowledge about indoor radon than subjects who did not, whose knowledge resembled the radon-interview subjects'.

  14. Evaluating the hazard from Siding Spring dust: Models and predictions

    Science.gov (United States)

    Christou, A.

    2014-12-01

    Long-period comet C/2013 A1 (Siding Spring) will pass at a distance of ~140 thousand km (9e-4 AU) - about a third of a lunar distance - from the centre of Mars, closer to this planet than any known comet has come to the Earth since records began. Closest approach is expected to occur at 18:30 UT on the 19th October. This provides an opportunity for a "free" flyby of a different type of comet than those investigated by spacecraft so far, including comet 67P/Churyumov-Gerasimenko currently under scrutiny by the Rosetta spacecraft. At the same time, the passage of the comet through Martian space will create the opportunity to study the reaction of the planet's upper atmosphere to a known natural perturbation. The flip-side of the coin is the risk to Mars-orbiting assets, both existing (NASA's Mars Odyssey & Mars Reconnaissance Orbiter and ESA's Mars Express) and in transit (NASA's MAVEN and ISRO's Mangalyaan) by high-speed cometary dust potentially impacting spacecraft surfaces. Much work has already gone into assessing this hazard and devising mitigating measures in the precious little warning time given to characterise this object until Mars encounter. In this presentation, we will provide an overview of how the meteoroid stream and comet coma dust impact models evolved since the comet's discovery and discuss lessons learned should similar circumstances arise in the future.

  15. Modelling Inland Flood Events for Hazard Maps in Taiwan

    Science.gov (United States)

    Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.

    2015-12-01

    Taiwan experiences significant inland flooding, driven by torrential rainfall from plum rain storms and typhoons during summer and fall. Over the last 13 to 16 years of data, about 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency, WRA). This long, narrow island nation, with mostly hilly/mountainous topography, is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and an annual average precipitation of 2,502 mm (WRA), 2.5 times the world average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3,000-5,000 mm in the mountainous regions, 78% of which falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have the steepest slopes, the shortest response times with rapid flows, and the largest peak flows as well as specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return-period maps at 1 arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated against daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage

  16. Conceptual geoinformation model of natural hazards risk assessment

    Science.gov (United States)

    Kulygin, Valerii

    2016-04-01

    Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts from different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme events risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of natural hazards' origin, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. The scenario approach is utilized to account for the spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of the use of ecosystem services by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to the research project No. 16-35-60043 mol_a_dk.

  17. Calculations of the response functions of Bonner spheres with a spherical 3He proportional counter using a realistic detector model

    International Nuclear Information System (INIS)

    Wiegel, B.; Alevra, A.V.; Siebert, B.R.L.

    1994-11-01

    A realistic geometry model of a Bonner sphere system with a spherical 3He-filled proportional counter and 12 polyethylene moderating spheres with diameters ranging from 7.62 cm (3'') to 45.72 cm (18'') is introduced. The MCNP Monte Carlo computer code is used to calculate the responses of this Bonner sphere system to monoenergetic neutrons in the energy range from 1 meV to 20 MeV. The relative uncertainties of the responses due to the Monte Carlo calculations are less than 1% for spheres up to 30.48 cm (12'') in diameter and less than 2% for the 15'' and 18'' spheres. Resonances in the carbon cross section are seen as significant structures in the response functions. Additional calculations were made to study the influence of the 3He number density and the polyethylene mass density on the response, as well as the angular dependence of the Bonner sphere system. The calculated responses can be adjusted to a large set of calibration measurements with only a single fit factor common to all sphere diameters and energies. (orig.) [de]

  18. Modelling the costs of natural hazards in games

    Science.gov (United States)

    Bostenaru-Dan, M.

    2012-04-01

    City simulation games are looked for today, including a development at the University of Torino called SimTorino, which simulates the development of the city in the next 20 years. The connection to another games genre besides video games, the board games, will be investigated, since there are games on the construction and reconstruction of a cathedral, its tower, and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings for earthquakes, in the sense of "upgrade", not just for extension as is currently the case in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but which was judged by the author as the worst in the history of mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards, but also on the built environment, by buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing. Games are also employed in the teaching of urban

  19. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of auxiliary indicators, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is ...

  20. A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva

    2018-03-01

    The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrence of large earthquakes is then followed by quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to consider series of large earthquakes that occur in clusters. The model is flexible enough to better reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard results are estimated by a hazard function comprising three parts: a decreasing hazard from the last large-earthquake cluster, an increasing hazard toward the next large-earthquake cluster, and a constant hazard of random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
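
    The three-part hazard function lends itself to a compact numerical sketch. The functional forms and parameters below are illustrative assumptions, not those of the paper:

        import numpy as np

        # Bimodal, time-dependent hazard as the sum of three parts: a decaying
        # hazard from the last cluster, a rising hazard toward the next cluster,
        # and a constant background rate of small-to-moderate events.
        def hazard(t, h_last=0.05, decay=0.02, h_next=0.08, rise=0.01, t_next=150.0, h0=0.01):
            decreasing = h_last * np.exp(-decay * t)                     # after the last cluster
            increasing = h_next / (1.0 + np.exp(-rise * (t - t_next)))   # toward the next cluster
            return decreasing + increasing + h0                          # plus background

        t = np.linspace(0, 300, 7)
        print(np.round(hazard(t), 4))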

  1. Understanding Recession and Self-Rated Health with the Partial Proportional Odds Model: An Analysis of 26 Countries.

    Science.gov (United States)

    Mayer, Adam; Foster, Michelle

    2015-01-01

    Self-rated health is demonstrated to vary substantially by both personal socio-economic status and national economic conditions. However, studies investigating the combined influence of individual and country level economic indicators across several countries in the context of recent global recession are limited. This paper furthers our knowledge of the effect of recession on health at both the individual and national level. Using the Life in Transition II study, which provides data from 19,759 individuals across 26 European nations, we examine the relationship between self-rated health, personal economic experiences, and macro-economic change. Data analyses include, but are not limited to, the partial proportional odds model, which permits the effect of predictors to vary across different levels of our dependent variable. Household experiences with recession, especially a loss of staple good consumption, are associated with lower self-rated health. Most individual-level experiences with recession, such as a job loss, have relatively small negative effects on perceived health; the effect of individual or household economic hardship is strongest in high income nations. Our findings also suggest that macroeconomic growth improves self-rated health in low-income nations but has no effect in high-income nations. Individuals with the greatest probability of "good" self-rated health reside in wealthy countries ($23,910 to $50,870 GNI per capita). Both individual and national economic variables are predictive of self-rated health. Personal and household experiences are most consequential for self-rated health in high income nations, while macroeconomic growth is most consequential in low-income nations.
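
    The distinguishing feature of the partial proportional odds model is that some covariate effects are allowed to vary across the cumulative splits of the ordinal outcome. A minimal sketch with made-up coefficients (hypothetical names and values, not estimates from the study):

        import numpy as np

        def logistic(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Self-rated health with ordered categories bad < fair < good. The
        # covariate "hardship" violates proportional odds, so its effect
        # gamma_j differs across the two cumulative splits; "age" keeps a
        # single proportional coefficient beta.
        alphas = np.array([-1.0, 1.2])          # cut-point intercepts for P(Y <= j)
        beta_age = 0.02                         # proportional (constant) effect
        gamma_hardship = np.array([0.9, 0.4])   # split-specific effects

        def cumulative_probs(age, hardship):
            z = alphas + beta_age * age + gamma_hardship * hardship
            return logistic(z)                  # P(Y <= bad), P(Y <= fair)

        c = cumulative_probs(age=40, hardship=1)
        probs = np.array([c[0], c[1] - c[0], 1.0 - c[1]])  # category probabilities
        print(probs.round(3))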

  2. Understanding Recession and Self-Rated Health with the Partial Proportional Odds Model: An Analysis of 26 Countries.

    Directory of Open Access Journals (Sweden)

    Adam Mayer

    Full Text Available Self-rated health is demonstrated to vary substantially by both personal socio-economic status and national economic conditions. However, studies investigating the combined influence of individual and country level economic indicators across several countries in the context of recent global recession are limited. This paper furthers our knowledge of the effect of recession on health at both the individual and national level. Using the Life in Transition II study, which provides data from 19,759 individuals across 26 European nations, we examine the relationship between self-rated health, personal economic experiences, and macro-economic change. Data analyses include, but are not limited to, the partial proportional odds model, which permits the effect of predictors to vary across different levels of our dependent variable. Household experiences with recession, especially a loss of staple good consumption, are associated with lower self-rated health. Most individual-level experiences with recession, such as a job loss, have relatively small negative effects on perceived health; the effect of individual or household economic hardship is strongest in high income nations. Our findings also suggest that macroeconomic growth improves self-rated health in low-income nations but has no effect in high-income nations. Individuals with the greatest probability of "good" self-rated health reside in wealthy countries ($23,910 to $50,870 GNI per capita). Both individual and national economic variables are predictive of self-rated health. Personal and household experiences are most consequential for self-rated health in high income nations, while macroeconomic growth is most consequential in low-income nations.

  3. Quantitative physical models of volcanic phenomena for hazards assessment of critical infrastructures

    Science.gov (United States)

    Costa, Antonio

    2016-04-01

    Volcanic hazards may have destructive effects on economy, transport, and natural environments at both local and regional scale. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazards assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. Volcanic hazards assessment is focussed on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazards assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used for describing the impact of volcanic phenomena generally represent a range of model complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis related to a specific phenomenon, the questions that need to be answered by the models must be carefully considered. Independently of the model, the final hazards assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim to provide a foundation for future work in developing an international consensus on volcanic hazards assessment methods.

  4. Taxonomic analysis of perceived risk: modeling individual and group perceptions within homogeneous hazard domains

    International Nuclear Information System (INIS)

    Kraus, N.N.; Slovic, P.

    1988-01-01

    Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions.

  5. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    Energy Technology Data Exchange (ETDEWEB)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  6. Computer models used to support cleanup decision-making at hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.

    1992-07-01

    Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced directed guidance on models that must be used in these efforts. To identify what models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.

  7. The 2014 update to the National Seismic Hazard Model in California

    Science.gov (United States)

    Powers, Peter; Field, Edward H.

    2015-01-01

    The 2014 update to the U.S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.

  8. A double moral hazard model of organization design

    OpenAIRE

    Berkovitch, Elazar; Israel, Ronen; Spiegel, Yossi

    2007-01-01

    We develop a theory of organization design in which the firm's structure is chosen to mitigate moral hazard problems in the selection and the implementation of projects. For a given set of projects, the 'divisional structure' which gives each agent the full responsibility over a subset of projects is in general more efficient than the functional structure under which projects are implemented by teams of agents, each of whom specializes in one task. However, the ex post efficiency of the divis...

  9. Report 3: Guidance document on practices to model and implement Extreme Weather hazards in extended PSA

    International Nuclear Information System (INIS)

    Alzbutas, R.; Ostapchuk, S.; Borysiewicz, M.; Decker, K.; Kumar, Manorma; Haeggstroem, A.; Nitoi, M.; Groudev, P.; Parey, S.; Potempski, S.; Raimond, E.; Siklossy, T.

    2016-01-01

    The goal of this report is to provide guidance on practices to model extreme weather hazards and implement them in extended level 1 PSA. This report is a joint deliverable of work package 21 (WP21) and work package 22 (WP22). The general objective of WP21 is to provide guidance on all of the individual hazards selected at the End Users Workshop. This guidance focuses on extreme weather hazards, namely extreme wind, extreme temperature and snow pack. Other hazards, however, are considered in cases where they are correlated/associated with the hazard under discussion. The guidance developed refers to existing guidance whenever possible. As recommended by end users, this guidance covers questions of developing integrated and/or separate extreme weather PSA models. (authors)

  10. Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy

    Directory of Open Access Journals (Sweden)

    J. Blahut

    2010-11-01

    Full Text Available Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal), and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise

  11. The anchor-based minimal important change, based on receiver operating characteristic analysis or predictive modeling, may need to be adjusted for the proportion of improved patients.

    Science.gov (United States)

    Terluin, Berend; Eekhout, Iris; Terwee, Caroline B

    2017-03-01

    Patients have their individual minimal important changes (iMICs) as their personal benchmarks to determine whether a perceived health-related quality of life (HRQOL) change constitutes a (minimally) important change for them. We denote the mean iMIC in a group of patients as the "genuine MIC" (gMIC). The aims of this paper are (1) to examine the relationship between the gMIC and the anchor-based minimal important change (MIC), determined by receiver operating characteristic analysis or by predictive modeling; (2) to examine the impact of the proportion of improved patients on these MICs; and (3) to explore the possibility of adjusting the MIC for the influence of the proportion of improved patients. Multiple simulations of patient samples involved in anchor-based MIC studies with different characteristics of HRQOL (change) scores and distributions of iMICs. In addition, a real data set is analyzed for illustration. The receiver operating characteristic-based and predictive modeling MICs equal the gMIC when the proportion of improved patients equals 0.5. The MIC is estimated higher than the gMIC when the proportion improved is greater than 0.5, and the MIC is estimated lower than the gMIC when the proportion improved is less than 0.5. Using an equation including the predictive modeling MIC, the log-odds of improvement, the standard deviation of the HRQOL change score, and the correlation between the HRQOL change score and the anchor results in an adjusted MIC reflecting the gMIC irrespective of the proportion of improved patients. Adjusting the predictive modeling MIC for the proportion of improved patients assures that the adjusted MIC reflects the gMIC. We assumed normal distributions and global perceived change scores that were independent of the follow-up score. Additionally, floor and ceiling effects were not taken into account.
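
    The receiver operating characteristic approach mentioned above can be sketched in a few lines: the MIC is taken as the change-score cutoff that maximizes the Youden index. The data below are simulated, with the proportion improved deliberately set above 0.5 to echo the paper's bias argument:

        import numpy as np

        rng = np.random.default_rng(1)
        n_improved, n_stable = 70, 30            # proportion improved = 0.7 (> 0.5)
        change = np.concatenate([rng.normal(8, 5, n_improved),   # improved patients
                                 rng.normal(0, 5, n_stable)])    # stable patients
        improved = np.concatenate([np.ones(n_improved), np.zeros(n_stable)])

        # Scan candidate cutoffs; keep the one with maximal sensitivity + specificity - 1.
        best_cut, best_youden = None, -1.0
        for c in np.sort(np.unique(change)):
            sens = np.mean(change[improved == 1] >= c)
            spec = np.mean(change[improved == 0] < c)
            if sens + spec - 1.0 > best_youden:
                best_youden, best_cut = sens + spec - 1.0, c
        print(f"ROC-based MIC = {best_cut:.2f}")  # biased upward when >50% improve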

  12. Proportionality for Military Leaders

    National Research Council Canada - National Science Library

    Brown, Gary D

    2000-01-01

    .... Especially lacking is a realization that there are four distinct types of proportionality. In determining whether a particular resort to war is just, national leaders must consider the proportionality of the conflict (i.e...

  13. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Science.gov (United States)

    Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom

    2017-06-01

    We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  14. Conceptual Development of a National Volcanic Hazard Model for New Zealand

    Directory of Open Access Journals (Sweden)

    Mark Stirling

    2017-06-01

    Full Text Available We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration, and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g., short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.

  15. Modeling of seismic hazards for dynamic reliability analysis

    International Nuclear Information System (INIS)

    Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.

    1993-01-01

    This paper investigates the appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra are involved in the energy class, and power and response spectra and PGAs are involved in the power class. These relationships are also investigated by using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in the time and frequency domains and has less variance than the other indices. In addition, we have proposed a procedure for DRA based on total energy. (author)

  16. Macroeconomic Proportions and Correlations

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-02-01

    Full Text Available The work is focusing on the main proportions and correlations which are being set up between the major macroeconomic indicators. This is the general frame for the analysis of the relations between the Gross Domestic Product growth rate and the unemployment rate; the interaction between the inflation rate and the unemployment rate; the connection between the GDP growth rate and the inflation rate. Within the analysis being performed, a particular attention is paid to “the basic relationship of the economic growth” by emphasizing the possibilities as to a factorial analysis of the macroeconomic development, mainly as far as the Gross Domestic Product is concerned. At this point, the authors are introducing the mathematical relations, which are used for modeling the macroeconomic correlations, hence the strictness of the analysis being performed.

  17. Macroeconomic Proportions and Correlations

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-04-01

    Full Text Available The work is focusing on the main proportions and correlations which are being set up between the major macroeconomic indicators. This is the general frame for the analysis of the relations between the Gross Domestic Product growth rate and the unemployment rate; the interaction between the inflation rate and the unemployment rate; the connection between the GDP growth rate and the inflation rate. Within the analysis being performed, a particular attention is paid to “the basic relationship of the economic growth” by emphasizing the possibilities as to a factorial analysis of the macroeconomic development, mainly as far as the Gross Domestic Product is concerned. At this point, the authors are introducing the mathematical relations, which are used for modeling the macroeconomic correlations, hence the strictness of the analysis being performed.

  18. Snakes as hazards: modelling risk by chasing chimpanzees.

    Science.gov (United States)

    McGrew, William C

    2015-04-01

    Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.

  19. Analyzing Right-Censored Length-Biased Data with Additive Hazards Model

    Institute of Scientific and Technical Information of China (English)

    Mu ZHAO; Cun-jie LIN; Yong ZHOU

    2017-01-01

    Length-biased data are often encountered in observational studies, when the survival times are left-truncated and right-censored and the truncation times follow a uniform distribution. In this article, we propose to analyze such data with the additive hazards model, which specifies that the hazard function is the sum of an arbitrary baseline hazard function and a regression function of covariates. We develop estimating equation approaches to estimate the regression parameters. The resultant estimators are shown to be consistent and asymptotically normal. Some simulation studies and a real data example are used to evaluate the finite sample properties of the proposed estimators.
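
    As a rough illustration of the model class (not of the article's estimating-equation approach for length-biased data), an additive hazards fit can be sketched with the Python lifelines package, assuming toy data:

        import pandas as pd
        from lifelines import AalenAdditiveFitter

        # Toy survival data; the additive hazards model writes the hazard as a
        # baseline plus covariate effects. The length-bias corrections developed
        # in the article are not part of this standard fitter.
        df = pd.DataFrame({
            "time":  [3, 6, 7, 10, 14, 20, 25, 31],
            "event": [1, 1, 0, 1, 1, 0, 1, 0],
            "x1":    [0.2, 1.1, 0.7, 0.1, 1.5, 0.9, 0.4, 1.3],
        })

        aaf = AalenAdditiveFitter(coef_penalizer=0.5)  # small ridge term for stability
        aaf.fit(df, duration_col="time", event_col="event")
        print(aaf.cumulative_hazards_.head())          # cumulative regression functions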

  20. Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method

    Science.gov (United States)

    Nugraha, A. L.; Awaluddin, M.; Sasmito, B.

    2018-02-01

    One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, as the capital of Central Java Province, is one of the cities with high natural disaster intensity. Frequent natural disasters in Semarang City include tidal floods, floods, landslides, and droughts. Therefore, Semarang City needs spatial information from multi-hazard mapping to support disaster mitigation planning. The multi-hazard map model can be derived from parameters such as slope, rainfall, land use, and soil type. This modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques to provide a good classification in determining disaster threats. The Fuzzy-GIS method builds a multi-hazard map of Semarang City that delivers results with good accuracy and an appropriate spread of threat classes, so as to provide disaster information for disaster mitigation planning of Semarang City. From the multi-hazard modelling using GIS-Fuzzy, the membership type with good accuracy was found to be the Gaussian membership, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership types tested.
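
    The GIS-Fuzzy combination can be sketched as fuzzification with a Gaussian membership function followed by a weighted overlay. The rasters and weights below are toy assumptions, not the Semarang inputs:

        import numpy as np

        def gauss_membership(x, c, sigma):
            # Gaussian membership: 1 at the centre c, decaying with distance.
            return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

        slope = np.array([[5.0, 25.0], [40.0, 12.0]])          # degrees (toy 2x2 raster)
        rainfall = np.array([[1800.0, 3200.0], [2500.0, 2900.0]])  # mm/year

        layers = [
            (gauss_membership(slope, c=35.0, sigma=15.0), 0.6),      # steep = hazardous
            (gauss_membership(rainfall, c=3000.0, sigma=600.0), 0.4),
        ]
        hazard = sum(w * m for m, w in layers)  # weighted overlay per grid cell
        print(np.round(hazard, 2))              # higher = more threatened cell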

  1. [Hazard evaluation modeling of particulate matters emitted by coal-fired boilers and case analysis].

    Science.gov (United States)

    Shi, Yan-Ting; Du, Qian; Gao, Jian-Min; Bian, Xin; Wang, Zhi-Pu; Dong, He-Ming; Han, Qiang; Cao, Yang

    2014-02-01

    In order to evaluate the hazard of PM2.5 emitted by various boilers, in this paper, segmentation of particulate matter with sizes below 2.5 μm was performed based on formation mechanisms and the hazard level to human beings and the environment. Meanwhile, taking into account the mass concentration, number concentration, enrichment factor of Hg, and content of Hg element in different coal ashes, a comprehensive model aimed at evaluating the hazard of PM2.5 emitted by coal-fired boilers was established in this paper. Finally, using field experimental data from previous literature, a case analysis of the evaluation model was conducted, and the concept of a hazard reduction coefficient was proposed, which can be used to evaluate the performance of dust removers.

  2. Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models

    Directory of Open Access Journals (Sweden)

    Yang beibei Ji

    2014-01-01

    Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data are obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are identified for both clearance and arrival time for 3 types of incident: crash, stationary vehicle, and hazard. The results show that Gamma, Log-logistic, and Weibull are the best fits for crash, stationary vehicle, and hazard incidents, respectively. The significant impact factors are identified for crash clearance time and arrival time. The quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
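
    The best-fit search over candidate duration distributions can be sketched with scipy, assuming simulated clearance times; Gamma, log-logistic (called Fisk in scipy), and Weibull are compared by AIC:

        import numpy as np
        from scipy import stats

        # Simulated clearance times (minutes), standing in for the SIMS data.
        rng = np.random.default_rng(0)
        clearance = rng.gamma(shape=2.0, scale=15.0, size=500)

        candidates = {
            "gamma":        stats.gamma,
            "log-logistic": stats.fisk,
            "weibull":      stats.weibull_min,
        }
        for name, dist in candidates.items():
            params = dist.fit(clearance, floc=0)          # location fixed at zero
            loglik = np.sum(dist.logpdf(clearance, *params))
            k = len(params) - 1                           # loc was fixed, not fitted
            print(f"{name:12s} AIC = {2 * k - 2 * loglik:.1f}")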

  3. Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan

    Directory of Open Access Journals (Sweden)

    Yu-Wen Chang

    2017-01-01

    Full Text Available Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the cyclic rupture characteristics of the active fault over a particular lifetime up to the present time. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source, and serves as a reference for developing a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe the fault characteristic behavior. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain the annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than current hazard estimates when the time-dependent model is used for those faults, particularly where the elapsed time since the last event of the fault (such as the Chelungpu fault) is short.
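
    The contrast between the BPT renewal model and the memoryless Poisson model reduces to a conditional probability calculation. A minimal sketch with assumed recurrence parameters (not the Taiwan fault values):

        import numpy as np
        from scipy import integrate

        # Brownian passage time (inverse Gaussian) density for recurrence times.
        def bpt_pdf(t, mu, alpha):
            return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
                   np.exp(-(t - mu) ** 2 / (2 * mu * alpha**2 * t))

        mu, alpha = 200.0, 0.5      # mean recurrence (yr) and aperiodicity (assumed)
        elapsed, dt = 100.0, 50.0   # years since the last event; forecast window

        # P(rupture in [elapsed, elapsed + dt] | no rupture up to elapsed)
        num, _ = integrate.quad(bpt_pdf, elapsed, elapsed + dt, args=(mu, alpha))
        den, _ = integrate.quad(bpt_pdf, elapsed, np.inf, args=(mu, alpha))
        p_bpt = num / den
        p_poisson = 1.0 - np.exp(-dt / mu)   # memoryless reference value
        print(f"BPT: {p_bpt:.3f}  Poisson: {p_poisson:.3f}")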

  4. Three-part joint modeling methods for complex functional data mixed with zero-and-one-inflated proportions and zero-inflated continuous outcomes with skewness.

    Science.gov (United States)

    Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J

    2018-02-20

    We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion of active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate; and the completely active pattern means that the proportion of activity is exactly one and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordered activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate, with Box-Cox transformations, when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points, with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in

  5. Teamwork tools and activities within the hazard component of the Global Earthquake Model

    Science.gov (United States)

    Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.

    2013-05-01

    The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.

  6. Ground motion models used in the 2014 U.S. National Seismic Hazard Maps

    Science.gov (United States)

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.

    2015-01-01

    The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.

  7. Three multimedia models used at hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C.; Rambaugh, J.O.; Potter, S.

    1996-02-01

    Multimedia models are used commonly in the initial phases of the remediation process, where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the model; these descriptions were provided by the model developers.

  8. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature...... disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified...... are usually deterministic and make use of auxiliary indicator, such as land cover, to spatially distribute exposures. As the dependence between auxiliary indicator and disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic...

  9. Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank

    2008-11-01

    Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate the significant adverse effects of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; (2) directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited; (3) identifying the key processes and chemical and/or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and (4) predicting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application

  10. Modeling exposure to persistent chemicals in hazard and risk assessment.

    Science.gov (United States)

    Cowan-Ellsberry, Christina E; McLachlan, Michael S; Arnot, Jon A; Macleod, Matthew; McKone, Thomas E; Wania, Frank

    2009-10-01

    Fate and exposure modeling has not, thus far, been explicitly used in the risk profile documents prepared for evaluating the significant adverse effect of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of persistent organic pollutants (POP) and persistent, bioaccumulative, and toxic (PBT) chemicals in the environment. The goal of this publication is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include 1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; 2) directly estimating the exposure of the environment, biota, and humans to provide information to complement measurements or where measurements are not available or are limited; 3) identifying the key processes and chemical or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and 4) forecasting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model the risk profile developer must evaluate how appropriate an existing model is for a specific setting and

  11. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data.

  12. Modeling and Testing Landslide Hazard Using Decision Tree

    Directory of Open Access Journals (Sweden)

    Mutasem Sh. Alkhasawneh

    2014-01-01

    Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain perception, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance, and are usually represented by an easy-to-interpret tree-like structure. Four models were created using the Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST) algorithms. Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137,570 samples was selected for each variable in the analysis, where 68,786 samples represent landslides and 68,786 samples represent no landslides. 10-fold cross-validation was employed for testing the models. The highest accuracy was achieved using Exhaustive CHAID (82.0%), compared to CHAID (81.9%), CRT (75.6%), and QUEST (74.0%). Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
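
    As a rough illustration of the workflow described above, the sketch below trains a single decision tree on synthetic stand-ins for the 21 DEM-derived factors, scores it with 10-fold cross-validation, and ranks factor importance. scikit-learn's CART implementation stands in for all four models, since CHAID, Exhaustive CHAID, and QUEST have no scikit-learn implementation; all data and parameters are invented.

```python
# Minimal sketch of the paper's evaluation protocol, with CART standing in
# for CHAID / Exhaustive CHAID / CRT / QUEST; data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for the 137,570-sample dataset: 21 DEM-derived factors,
# balanced landslide / no-landslide labels (values are synthetic).
X = rng.normal(size=(2000, 21))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=6, random_state=0)

# 10-fold cross-validation, as used for testing the four models.
acc = cross_val_score(tree, X, y, cv=10, scoring="accuracy")
print(f"mean 10-fold accuracy: {acc.mean():.3f}")

# Factor importance, the quantity the paper ranks (slope angle etc.).
tree.fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]
print("top-5 factors by importance:", ranking[:5])
```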

  13. Modeling Wildfire Hazard in the Western Hindu Kush-Himalayas

    Science.gov (United States)

    Bylow, D.

    2012-12-01

    Wildfire regimes are a leading driver of global environmental change affecting a diverse array of global ecosystems. Particulates and aerosols produced by wildfires are a primary source of air pollution making the early detection and monitoring of wildfires crucial. The objectives of this study were to model regional wildfire potential and identify environmental, topological, and sociological factors that contribute to the ignition of wildfire events in the Western Hindu Kush-Himalayas of South Asia. The environmental, topological, and sociological factors were used to model regional wildfire potential through multi-criteria evaluation using a method of weighted linear combination. Moderate Resolution Imaging Spectroradiometer (MODIS) and geographic information systems (GIS) data were integrated to analyze regional wildfires and construct the model. Model validation was performed using a holdout cross validation method. The study produced a significant model of wildfire potential in the Western Hindu Kush-Himalayas.
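
    The weighted linear combination step named above is mechanically simple, and the hedged sketch below shows it on toy rasters: each normalized factor layer is multiplied by an expert weight and summed into a potential surface. The factor names and weights are illustrative assumptions, not values from the study.

```python
# Sketch of multi-criteria evaluation by weighted linear combination:
# each factor raster is normalized to [0, 1] and combined with weights.
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)                      # toy raster grid

factors = {                             # stand-ins for MODIS/GIS layers
    "dryness":          rng.random(shape),
    "slope":            rng.random(shape),
    "dist_settlement":  rng.random(shape),
}
weights = {"dryness": 0.5, "slope": 0.2, "dist_settlement": 0.3}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to one

# Weighted linear combination: potential = sum_i w_i * x_i
potential = sum(w * factors[name] for name, w in weights.items())
print("wildfire potential range:", potential.min(), potential.max())
```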

  14. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  15. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model and contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
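
    To make the EKF-plus-PI idea concrete, here is a minimal one-state sketch: an EKF tracks SOC through a toy first-order battery model, and a proportional-integral term on the voltage innovation nudges the estimate, in the spirit of the paper's error adjustment. The OCV curve, noise levels, and PI gains are all assumptions for illustration, not the paper's values.

```python
# Illustrative 1-state EKF for SOC with a PI correction on the voltage
# innovation. Model, parameters, and gains are assumed, not the paper's.
import numpy as np

Q_AH = 2.0 * 3600          # capacity [As]
R0, DT = 0.05, 1.0         # ohmic resistance [ohm], time step [s]

def ocv(soc):              # toy open-circuit-voltage curve (assumed)
    return 3.2 + 0.9 * soc

def docv(soc):             # its derivative w.r.t. SOC
    return 0.9

soc_hat, P = 0.9, 1e-2     # initial estimate and covariance
Qn, Rn = 1e-7, 1e-3        # process / measurement noise (assumed)
kp, ki, e_int = 0.01, 0.001, 0.0

rng = np.random.default_rng(2)
soc_true = 0.8
for k in range(200):
    i_load = 1.0                                   # 1 A discharge
    soc_true -= i_load * DT / Q_AH
    v_meas = ocv(soc_true) - R0 * i_load + rng.normal(0, 0.01)

    # EKF predict: coulomb counting
    soc_hat -= i_load * DT / Q_AH
    P += Qn
    # EKF update (H is the OCV slope)
    H = docv(soc_hat)
    K = P * H / (H * P * H + Rn)
    e = v_meas - (ocv(soc_hat) - R0 * i_load)      # innovation
    e_int += e * DT                                # integral of error
    soc_hat += K * e + kp * e + ki * e_int         # EKF + PI adjustment
    P *= (1 - K * H)

print(f"true SOC {soc_true:.4f}, estimated SOC {soc_hat:.4f}")
```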

  16. Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach

    Science.gov (United States)

    van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.

    2015-01-01

    Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focuses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0
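
    The core of a discrete Bayesian Network node of the kind described above is a conditional probability table estimated from co-occurrence counts. The hedged sketch below bins two hazard indicators and tabulates P(damage state | bins) from synthetic observations; the indicator names, bin edges, and data are placeholders, not the study's.

```python
# Sketch of one discrete Bayesian Network node: estimate
# P(damage state | binned hazard indicators) from observation counts.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "inundation_m": rng.gamma(2.0, 0.5, n),
    "scour_m":      rng.gamma(1.5, 0.2, n),
})
# Synthetic damage outcome loosely driven by the two indicators.
score = df["inundation_m"] + 2 * df["scour_m"] + rng.normal(0, 0.5, n)
df["damage"] = pd.cut(score, [-np.inf, 1.0, 2.0, 3.0, np.inf],
                      labels=["minor", "moderate", "major", "destroyed"])

# Discretize the hazard indicators (parent nodes of the damage node).
df["inund_bin"] = pd.cut(df["inundation_m"], [0, 0.5, 1.5, np.inf])
df["scour_bin"] = pd.cut(df["scour_m"], [0, 0.2, 0.6, np.inf])

# Conditional probability table: rows are parent configurations.
cpt = (df.groupby(["inund_bin", "scour_bin"], observed=True)["damage"]
         .value_counts(normalize=True).unstack(fill_value=0))
print(cpt.round(2))
```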

  17. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    OpenAIRE

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-01-01

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest.

  18. The Principle of Proportionality

    DEFF Research Database (Denmark)

    Bennedsen, Morten; Meisner Nielsen, Kasper

    2005-01-01

    Recent policy initiatives within the harmonization of European company laws have promoted a so-called "principle of proportionality" through proposals that regulate mechanisms opposing a proportional distribution of ownership and control. We scrutinize the foundation for these initiatives ... in relationship to the process of harmonization of the European capital markets. JEL classifications: G30, G32, G34 and G38. Keywords: Ownership Structure, Dual Class Shares, Pyramids, EU company laws.

  19. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  20. Hazard Response Modeling Uncertainty (A Quantitative Method). Volume 2. Evaluation of Commonly Used Hazardous Gas Dispersion Models

    Science.gov (United States)

    1993-03-01

    the HDA. The model will explicitly account for initial dilution, aerosol evaporation, and entrainment for turbulent jets, which simplifies... D.N., Yohn, J.F., Koopman, R.P. and Brown, T.C., "Conduct of Anhydrous Hydrofluoric Acid Spill Experiments," Proc. Int. Conf. on Vapor Cloud Modeling

  1. Modeling and Prediction of Wildfire Hazard in Southern California, Integration of Models with Imaging Spectrometry

    Science.gov (United States)

    Roberts, Dar A.; Church, Richard; Ustin, Susan L.; Brass, James A. (Technical Monitor)

    2001-01-01

    Large urban wildfires throughout southern California have caused billions of dollars of damage and significant loss of life over the last few decades. Rapid urban growth along the wildland interface, high fuel loads and a potential increase in the frequency of large fires due to climatic change suggest that the problem will worsen in the future. Improved fire spread prediction and reduced uncertainty in assessing fire hazard would be significant, both economically and socially. Current problems in the modeling of fire spread include the role of plant community differences, spatial heterogeneity in fuels and spatio-temporal changes in fuels. In this research, we evaluated the potential of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data for providing improved maps of wildfire fuel properties. Analysis concentrated in two areas of Southern California, the Santa Monica Mountains and Santa Barbara Front Range. Wildfire fuel information can be divided into four basic categories: fuel type, fuel load (live green and woody biomass), fuel moisture and fuel condition (live vs senesced fuels). To map fuel type, AVIRIS data were used to map vegetation species using Multiple Endmember Spectral Mixture Analysis (MESMA) and Binary Decision Trees. Green live biomass and canopy moisture were mapped using AVIRIS through analysis of the 980 nm liquid water absorption feature and compared to alternate measures of moisture and field measurements. Woody biomass was mapped using L and P band cross polarimetric data acquired in 1998 and 1999. Fuel condition was mapped using spectral mixture analysis to map green vegetation (green leaves), nonphotosynthetic vegetation (NPV; stems, wood and litter), shade and soil. Summaries describing the potential of hyperspectral and SAR data for fuel mapping are provided by Roberts et al. and Dennison et al. To utilize remotely sensed data to assess fire hazard, fuel-type maps were translated
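
    The spectral mixture analysis mentioned above reduces, in its linear form, to solving for non-negative endmember fractions that reconstruct each pixel spectrum. The sketch below does this with non-negative least squares and a soft sum-to-one constraint; the endmember spectra and fractions are synthetic placeholders, not AVIRIS data.

```python
# Hedged sketch of linear spectral mixture analysis: solve pixel ≈ E @ f
# for fractions f >= 0, softly enforcing sum(f) = 1. Spectra are toys.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
bands = 50
# Columns: green vegetation, NPV, soil, shade (shade ~ zero reflectance).
E = np.column_stack([
    np.linspace(0.1, 0.5, bands),            # GV (toy spectrum)
    np.linspace(0.3, 0.4, bands),            # NPV
    np.linspace(0.2, 0.35, bands),           # soil
    np.zeros(bands),                         # shade
])
f_true = np.array([0.5, 0.2, 0.2, 0.1])
pixel = E @ f_true + rng.normal(0, 0.005, bands)

# Append a weighted sum-to-one row to softly enforce sum(f) = 1.
w = 10.0
A = np.vstack([E, w * np.ones(4)])
b = np.concatenate([pixel, [w]])
f_hat, _ = nnls(A, b)
print("estimated fractions:", f_hat.round(3), "true:", f_true)
```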

  2. The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms

    Science.gov (United States)

    Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie

    2009-01-01

    The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with

  3. Eolian Modeling System: Predicting Windblown Dust Hazards in Battlefield Environments

    Science.gov (United States)

    2011-05-03

    trend (i.e., a straight line on log-log scales) given by R ∝ T^(-α), (1) where R is the accumulation rate, T is the time interval of accumulation, and α... Figure 5(A) for a representative set of model parameters. The straight line labeled by h represents a linear increase in epipedon thickness with time... Pelletier, Frequency-magnitude distribution of eolian transport and the geomorphically most-effective windstorm, submitted but not accepted to Geophysical
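
    Equation (1) is linear in log-log space, log R = c − α log T, so α is simply the negative slope of a least-squares fit to logged data. The short sketch below demonstrates recovering α from synthetic accumulation-rate data; all numbers are invented for illustration.

```python
# Fit the exponent of R ∝ T^(-alpha) as the negative slope in log-log
# space. Data below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(5)
T = np.logspace(1, 5, 30)                  # averaging intervals
alpha_true = 0.4
R = 2.0 * T ** (-alpha_true) * np.exp(rng.normal(0, 0.1, T.size))

slope, intercept = np.polyfit(np.log(T), np.log(R), 1)
print(f"fitted alpha = {-slope:.3f} (true {alpha_true})")
```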

  4. The unconvincing product - Consumer versus expert hazard identification: A mental models study of novel foods

    DEFF Research Database (Denmark)

    Hagemann, Kit; Scholderer, Joachim

    and experts' understanding of benefits and risks associated with three novel foods (a potato, rice and functional food ingredients) using a relatively new methodology for the study of risk perception called mental models. Mental models focus on the way people conceptualise hazardous processes and allow ... researchers to pit a normative analysis (expert mental models) against a descriptive analysis (consumer mental models). Expert models were elicited by means of a three-wave Delphi procedure from altogether 24 international experts, and consumer models from in-depth interviews with Danish consumers. The results ... revealed that consumers' and experts' mental models differed in scope. Experts focused on the types of hazards for which risk assessments can be conducted under current legal frameworks, whereas consumers were concerned about issues that lay outside the scope of current legislation. Experts...

  5. Proportion-corrected scaled voxel models for Japanese children and their application to the numerical dosimetry of specific absorption rate for frequencies from 30 MHz to 3 GHz

    International Nuclear Information System (INIS)

    Nagaoka, Tomoaki; Watanabe, Soichi; Kunieda, Etsuo

    2008-01-01

    The development of high-resolution anatomical voxel models of children is difficult given, inter alia, the ethical limitations on subjecting children to medical imaging. We instead used an existing voxel model of a Japanese adult and three-dimensional deformation to develop three voxel models that match the average body proportions of Japanese children at 3, 5 and 7 years old. The adult model was deformed to match the proportions of a child by using the measured dimensions of various body parts of children at 3, 5 and 7 years old and a free-form deformation technique. The three developed models represent average-size Japanese children of the respective ages. They consist of cubic voxels (2 mm on each side) and are segmented into 51 tissues and organs. We calculated the whole-body-averaged specific absorption rates (WBA-SARs) and tissue-averaged SARs for the child models for exposures to plane waves from 30 MHz to 3 GHz; these results were then compared with those for scaled-down adult models. We also determined the incident electric-field strength required to produce the exposure equivalent to the ICNIRP basic restriction for general public exposure, i.e., a WBA-SAR of 0.08 W kg^-1.
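
    The last step named in the abstract, finding the incident field that just produces the basic restriction, follows from the standard relation that, for a fixed exposure geometry and frequency, WBA-SAR scales with the square of the incident field strength. The sketch below applies that scaling to a made-up reference simulation; the reference values are assumptions, not the paper's results.

```python
# For fixed geometry, WBA-SAR ∝ E^2, so one reference simulation yields
# the field producing exactly the ICNIRP limit. Values are illustrative.
import math

SAR_LIMIT = 0.08          # ICNIRP general-public WBA-SAR limit [W/kg]
E_ref = 1.0               # incident field in the reference run [V/m]
sar_ref = 0.012           # WBA-SAR computed at E_ref [W/kg] (made up)

E_required = E_ref * math.sqrt(SAR_LIMIT / sar_ref)
print(f"field for WBA-SAR = 0.08 W/kg: {E_required:.2f} V/m")
```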

  6. Evaluation and hydrological modelization in the natural hazard prevention

    International Nuclear Information System (INIS)

    Pla Sentis, Ildefonso

    2011-01-01

    Soil degradation negatively affects the soil's functions as a base for food production, for regulation of the hydrological cycle, and for environmental quality. All over the world soil degradation is increasing, partly due to gaps or deficiencies in the evaluation of the processes and causes of this degradation in each specific situation. The processes of soil physical degradation are manifested through several problems such as compaction, runoff, hydric and eolian erosion, and landslides, with collateral effects in situ and at a distance, often with disastrous consequences such as floods, landslides, sedimentation, and droughts. These processes are frequently associated with unfavorable changes in the hydrologic processes responsible for the water balance and soil hydric regimes, mainly derived from changes in soil use, different management practices, and climatic change. The evaluation of these processes using simple simulation models, under several scenarios of climatic change, soil properties, and land use and management, would allow prediction of the occurrence of these disastrous processes and, consequently, selection and application of the appropriate soil conservation practices to eliminate or reduce their effects. These simulation models require, as a base, detailed climatic information and hydrologic soil properties data. Despite the existence of methodologies and commercial equipment (ever more sophisticated and precise) to measure the different physical and hydrological soil properties related to degradation processes, most of them are applicable only under very specific or laboratory conditions. Often indirect methodologies are used, based on relations or empirical indexes without adequate validation, which frequently lead to expensive mistakes in the evaluation of soil degradation processes and their effects on natural disasters. Preferable would be simple field methodologies, direct and adaptable to different soil types and climates and to the sample size and the spatial variability of the

  7. Modelling human interactions in the assessment of man-made hazards

    International Nuclear Information System (INIS)

    Nitoi, M.; Farcasiu, M.; Apostol, M.

    2016-01-01

    The human reliability assessment tools are not currently capable of adequately modelling the human ability to adapt, to innovate and to manage under extreme situations. The paper presents the results obtained by the ICN PSA team in the frame of the FP7 Advanced Safety Assessment Methodologies: extended PSA (ASAMPSA_E) project regarding the investigation of conducting HRA for man-made hazards. The paper proposes a four-step methodology for the assessment of human interactions in external events (definition and modelling of human interactions; quantification of human failure events; recovery analysis; review). The most relevant factors with respect to HRA for man-made hazards (response execution complexity; existence of procedures with respect to the scenario in question; time available for action; timing of cues; accessibility of equipment; harsh environmental conditions) are presented and discussed thoroughly. The challenges identified in relation to man-made hazards HRA are highlighted. (authors)

  8. Guidance document on practices to model and implement Earthquake hazards in extended PSA (final version). Volume 1

    International Nuclear Information System (INIS)

    Decker, K.; Hirata, K.; Groudev, P.

    2016-01-01

    The current report provides guidance for the assessment of seismo-tectonic hazards in level 1 and 2 PSA. The objective is to review existing guidance, identify methodological challenges, and to propose novel guidance on key issues. Guidance for the assessment of vibratory ground motion and fault capability comprises the following: - listings of data required for the hazard assessment and methods to estimate data quality and completeness; - in-depth discussion of key input parameters required for hazard models; - discussions on commonly applied hazard assessment methodologies; - references to recent advances of science and technology. Guidance on the assessment of correlated or coincident hazards comprises chapters on: - screening of correlated hazards; - assessment of correlated hazards (natural and man-made); - assessment of coincident hazards. (authors)

  9. Analogical proportions: another logical view

    Science.gov (United States)

    Prade, Henri; Richard, Gilles

    This paper investigates the logical formalization of a restricted form of analogical reasoning based on analogical proportions, i.e. statements of the form a is to b as c is to d. Starting from a naive set theoretic interpretation, we highlight the existence of two noticeable companion proportions: one states that a is to b the converse of what c is to d (reverse analogy), while the other, called paralogical proportion, expresses that what a and b have in common, c and d have it also. We identify the characteristic postulates of the three types of proportions and examine their consequences from an abstract viewpoint. We further study the properties of the set theoretic interpretation and of the Boolean logic interpretation, and we provide another light on the understanding of the role of permutations in the modeling of the three types of proportions. Finally, we address the use of these proportions as a basis for inference in a propositional setting, and relate it to more general schemes of analogical reasoning. The differences between analogy, reverse analogy, and paralogy are further emphasized in a three-valued setting, which is also briefly presented.
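
    One common Boolean reading of the analogical proportion, in the spirit of the set-theoretic intuition above, is that the differences between a and b must equal those between c and d. The sketch below enumerates all Boolean 4-tuples under that reading and recovers the six classically valid patterns; it is an illustration of the idea, not a reproduction of the paper's full formalization.

```python
# Boolean reading of "a is to b as c is to d":
# (a ∧ ¬b ≡ c ∧ ¬d) and (¬a ∧ b ≡ ¬c ∧ d).
from itertools import product

def analogy(a, b, c, d):
    return ((a and not b) == (c and not d)) and \
           ((not a and b) == (not c and d))

valid = [t for t in product([0, 1], repeat=4) if analogy(*t)]
for t in valid:
    print(t)
print(f"{len(valid)} of 16 assignments satisfy the proportion")
```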

  10. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning.

    Directory of Open Access Journals (Sweden)

    Abe D Hofman

    Full Text Available We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development.

  11. Contribution of physical modelling to climate-driven landslide hazard mapping: an alpine test site

    Science.gov (United States)

    Vandromme, R.; Desramaut, N.; Baills, A.; Hohmann, A.; Grandjean, G.; Sedan, O.; Mallet, J. P.

    2012-04-01

    The aim of this work is to develop a methodology for integrating climate change scenarios into quantitative hazard assessment, and especially their precipitation component. The effects of climate change will be different depending on both the location of the site and the type of landslide considered. Indeed, mass movements can be triggered by different factors. This paper describes a methodology to address this issue and shows an application on an alpine test site. Mechanical approaches represent a solution for quantitative landslide susceptibility and hazard modeling. However, as the quantity and the quality of data are generally very heterogeneous at a regional scale, it is necessary to take into account the uncertainty in the analysis. In this perspective, a new hazard modeling method is developed and integrated in a program named ALICE. This program integrates mechanical stability analysis through GIS software, taking into account data uncertainty. This method proposes a quantitative classification of landslide hazard and offers a useful tool to gain time and efficiency in hazard mapping. However, an expert approach is still necessary to finalize the maps. Indeed, it is the only way to take into account some influential factors in slope stability such as heterogeneity of the geological formations or effects of anthropic interventions. To go further, the alpine test site (Barcelonnette area, France) is being used to integrate climate change scenarios into the ALICE program, and especially their precipitation component, with the help of a hydrological model (GARDENIA) and the regional climate model REMO (Jacob, 2001). From a DEM, land-cover map, geology, geotechnical data and so forth, the program classifies hazard zones depending on geotechnics and different hydrological contexts varying in time. This communication, realized within the framework of the SafeLand project, is supported by the European Commission under the 7th Framework Programme for Research and Technological

  12. Flood hazard mapping of Palembang City by using 2D model

    Science.gov (United States)

    Farid, Mohammad; Marlina, Ayu; Kusuma, Muhammad Syahril Badri

    2017-11-01

    Palembang, as the capital city of South Sumatera Province, is one of the metropolitan cities in Indonesia that floods almost every year. Flooding in the city is highly related to the Musi River Basin. Based on the Indonesia National Agency of Disaster Management (BNPB), the level of flood hazard is high. Many natural factors cause flooding in the city, such as high rainfall intensity, inadequate drainage capacity, and backwater flow due to spring tide. Furthermore, anthropogenic factors such as population increase, land cover/use change, and garbage problems make the flood problem worse. The objective of this study is to develop a flood hazard map of Palembang City by using a two-dimensional model. HEC-RAS 5.0 is used as the modelling tool, verified with field observation data. There are 21 sub-catchments of the Musi River Basin in the flood simulation. The level of flood hazard refers to Head Regulation of BNPB number 2 of 2012 regarding the general guideline of disaster risk assessment. The result for the 25-year return period flood shows that, with 112.47 km2 of inundation area, 14 sub-catchments are categorized as high hazard level. It is expected that the hazard map can be used for risk assessment.

  13. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  14. The relevance of the school socioeconomic composition and school proportion of repeaters on grade repetition in Brazil: a multilevel logistic model of PISA 2012

    Directory of Open Access Journals (Sweden)

    Maria Eugénia Ferrão

    2017-02-01

    Full Text Available Abstract The paper extends the literature on grade repetition in Brazil by (a) describing and synthesizing the main research findings and contributions since 1940, (b) enlarging the understanding of the inequity mechanism in education, and (c) providing new findings on the effects of the school socioeconomic composition and school proportion of repeaters on the individual probability of grade repetition. Based on the analyses of empirical distributions and multilevel logistic modelling of PISA 2012 data, the findings indicate that higher student socioeconomic status is associated with lower probability of repetition, there is a cumulative risk of repetition after an early repetition, the school socioeconomic composition is strongly correlated with the school proportion of repeaters, and both are related to the individual probability of repetition. The results suggest the existence of a pattern that cumulatively reinforces the effects of social disadvantage, in which the school plays a central role.

  15. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    Science.gov (United States)

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimation equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values and can capture dynamic changes of time or other interested variables on both mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.

  16. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.

    Science.gov (United States)

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-11-27

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriate for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  17. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    Science.gov (United States)

    Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-01-01

    In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriate for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further. PMID:29186922

  18. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    Directory of Open Access Journals (Sweden)

    Jangwon Suh

    2017-11-01

    Full Text Available In this study, current geographic information system (GIS)-based methods and their application for the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool in relation to mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or elements of the hazard of interest. Because GIS is appropriate for the handling of geospatial data in relation to mining-induced hazards, the application and feasibility of exploiting GIS-based modeling and assessment of mining-induced hazards within the mining industry could be expanded further.

  19. Recent Progress in Understanding Natural-Hazards-Generated TEC Perturbations: Measurements and Modeling Results

    Science.gov (United States)

    Komjathy, A.; Yang, Y. M.; Meng, X.; Verkhoglyadova, O. P.; Mannucci, A. J.; Langley, R. B.

    2015-12-01

    Natural hazards, including earthquakes, volcanic eruptions, and tsunamis, have been significant threats to humans throughout recorded history. The Global Positioning System satellites have become primary sensors to measure signatures associated with such natural hazards. These signatures typically include GPS-derived seismic deformation measurements, co-seismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure and monitor post-seismic ionospheric disturbances caused by earthquakes, volcanic eruptions, and tsunamis. Research at the University of New Brunswick (UNB) laid the foundations to model the three-dimensional ionosphere at NASA's Jet Propulsion Laboratory by ingesting ground- and space-based GPS measurements into the state-of-the-art Global Assimilative Ionosphere Modeling (GAIM) software. As an outcome of the UNB and NASA research, new and innovative GPS applications have been invented, including the use of ionospheric measurements to detect tiny fluctuations in the GPS signals between the spacecraft and GPS receivers caused by natural hazards occurring on or near the Earth's surface. We will show examples of early detection of natural-hazard-generated ionospheric signatures using ground-based and space-borne GPS receivers. We will also discuss recent results from the U.S. Real-time Earthquake Analysis for Disaster Mitigation Network (READI) exercises utilizing our algorithms. By studying the propagation properties of ionospheric perturbations generated by natural hazards, along with applying sophisticated first-principles physics-based modeling, we are on track to develop new technologies that can potentially save human lives and minimize property damage. It is also expected that ionospheric monitoring of TEC perturbations might become an integral part of existing natural hazards warning systems.
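
    The TEC computation at the heart of this approach rests on the standard dual-frequency relation: because the ionospheric delay is dispersive, the slant TEC follows from the difference of the pseudoranges on the two GPS carriers. The sketch below applies that textbook relation; the pseudorange values are invented for illustration.

```python
# Standard dual-frequency relation behind GPS-derived TEC: slant TEC from
# the L1/L2 pseudorange difference. Pseudorange values are made up.
F1, F2 = 1.57542e9, 1.22760e9        # GPS L1, L2 frequencies [Hz]

def slant_tec(p1, p2):
    """Slant TEC in TECU (1 TECU = 1e16 el/m^2) from pseudoranges [m]."""
    k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return k * (p2 - p1) / 1e16

print(f"{slant_tec(22_000_000.0, 22_000_004.1):.1f} TECU")
```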

  20. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma

  1. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...
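
    A heavily simplified version of such screening can be sketched by ranking features on the absolute z-statistic of a univariate Cox fit and retaining the top few; this stands in for the paper's single-index screening statistic and uses the lifelines library on synthetic data.

```python
# Sketch of sure-independence-style screening for survival data: rank
# features by the |z| of a univariate Cox fit (a simplification of the
# paper's statistic). Data are synthetic; p is kept small here.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n, p = 300, 50
X = rng.normal(size=(n, p))
hazard = np.exp(0.8 * X[:, 0] - 0.8 * X[:, 1])
T = rng.exponential(1.0 / hazard)
C = rng.exponential(2.0, n)            # censoring times
df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
df["T"], df["E"] = np.minimum(T, C), (T <= C).astype(int)

scores = {}
for j in range(p):
    cph = CoxPHFitter().fit(df[[f"x{j}", "T", "E"]],
                            duration_col="T", event_col="E")
    scores[f"x{j}"] = abs(cph.summary["z"].iloc[0])

top = sorted(scores, key=scores.get, reverse=True)[:5]
print("screened features:", top)       # should include x0 and x1
```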

  2. ASCHFLOW - A dynamic landslide run-out model for medium scale hazard analysis

    Czech Academy of Sciences Publication Activity Database

    Quan Luna, B.; Blahůt, Jan; van Asch, T.W.J.; van Westen, C.J.; Kappes, M.

    2016-01-01

    Roč. 3, 12 December (2016), č. článku 29. E-ISSN 2197-8670 Institutional support: RVO:67985891 Keywords : landslides * run-out models * medium scale hazard analysis * quantitative risk assessment Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  3. An advanced model for spreading and evaporation of accidentally released hazardous liquids on land

    NARCIS (Netherlands)

    Trijssenaar-Buhre, I.J.M.; Sterkenburg, R.P.; Wijnant-Timmerman, S.I.

    2009-01-01

    Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a

  4. An advanced model for spreading and evaporation of accidentally released hazardous liquids on land

    NARCIS (Netherlands)

    Trijssenaar-Buhre, I.J.M.; Wijnant-Timmerman, S.L.

    2008-01-01

    Pool evaporation modelling is an important element in consequence assessment of accidentally released hazardous liquids. The evaporation rate determines the amount of toxic or flammable gas released into the atmosphere and is an important factor for the size of a pool fire. In this paper a

  5. Level-Dependent Nonlinear Hearing Protector Model in the Auditory Hazard Assessment Algorithm for Humans

    Science.gov (United States)

    2015-04-01

    HPD model. In an article on measuring HPD attenuation, Berger (1986) points out that Real Ear Attenuation at Threshold (REAT) tests are...men. Audiology. 1991;30:345–356. Fedele P, Binseel M, Kalb J, Price GR. Using the auditory hazard assessment algorithm for humans (AHAAH) with

  6. Combining computational models for landslide hazard assessment of Guantánamo province, Cuba

    NARCIS (Netherlands)

    Castellanos Abella, E.A.

    2008-01-01

    As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial

  7. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    Science.gov (United States)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent, global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale and why using innovative techniques customised for broad-scale use is preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some

  8. Building a risk-targeted regional seismic hazard model for South-East Asia

    Science.gov (United States)

    Woessner, J.; Nyst, M.; Seyhan, E.

    2015-12-01

    The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines, and Indochina. The source model builds upon refined modelling approaches to characterize 1) seismic activity from geologic and geodetic data on crustal faults, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. Sumatra fault zone, Philippine fault zone) as well as the subduction zones, showcase some characteristics and sensitivities due to existing uncertainties in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.

  9. Report 2: Guidance document on practices to model and implement external flooding hazards in extended PSA

    International Nuclear Information System (INIS)

    Rebour, V.; Georgescu, G.; Leteinturier, D.; Raimond, E.; La Rovere, S.; Bernadara, P.; Vasseur, D.; Brinkman, H.; Groudev, P.; Ivanov, I.; Turschmann, M.; Sperbeck, S.; Potempski, S.; Hirata, K.; Kumar, Manorma

    2016-01-01

    This report provides a review of existing practices to model and implement external flooding hazards in existing level 1 PSA. The objective is to identify good practices on the modelling of initiating events (internal and external hazards) with a perspective of development of extended PSA and implementation of external events modelling in extended L1 PSA, its limitations/difficulties as far as possible. The views presented in this report are based on the ASAMPSA-E partners' experience and available publications. The report includes discussions on the following issues: - how to structure a L1 PSA for external flooding events, - information needed from geosciences in terms of hazards modelling and to build relevant modelling for PSA, - how to define and model the impact of each flooding event on SSCs with distinction between the flooding protective structures and devices and the effect of protection failures on other SSCs, - how to identify and model the common cause failures in one reactor or between several reactors, - how to apply HRA methodology for external flooding events, - how to credit additional emergency response (post-Fukushima measures like mobile equipment), - how to address the specific issues of L2 PSA, - how to perform and present risk quantification. (authors)

  10. Tornado hazard model with the variation effects of tornado intensity along the path length

    International Nuclear Information System (INIS)

    Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo

    2015-01-01

    Most Japanese tornadoes have been reported near the coastline, where all of the Japanese nuclear power plants are located. It is necessary for Japanese electric power companies to assess tornado risks to the plants according to a new regulation issued in 2013. The new regulatory guide exemplifies a tornado hazard model which cannot consider the variation of tornado intensity along the path length and consequently produces conservative risk estimates. The guide also recommends a long narrow strip area along the coastline with a width of 5-10 km as a region of interest, although the model tends to estimate inadequate wind speeds there due to its limits of application. The purpose of this study is to propose a new tornado hazard model which can be applied to the long narrow strip area. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)

  11. Multiwire proportional chamber development

    Science.gov (United States)

    Doolittle, R. F.; Pollvogt, U.; Eskovitz, A. J.

    1973-01-01

    The development of large-area multiwire proportional chambers, to be used as high-resolution spatial detectors in cosmic ray experiments, is described. A readout system was developed which uses a directly coupled, lumped-element delay-line whose characteristics are independent of the MWPC design. A complete analysis of the delay-line and the readout electronic system shows that a spatial resolution of about 0.1 mm can be reached with the MWPC operating in the strictly proportional region. This was confirmed by measurements with a small MWPC and Fe-55 X-rays. A simplified analysis was carried out to estimate the theoretical limit of spatial resolution due to delta-rays, spread of the discharge along the anode wire, and inclined trajectories. To calculate the gas gain of MWPCs of different geometrical configurations, a method was developed which is based on the knowledge of the first Townsend coefficient of the chamber gas.

  12. Application of decision tree model for the ground subsidence hazard mapping near abandoned underground coal mines.

    Science.gov (United States)

    Lee, Saro; Park, Inhye

    2013-09-30

    Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed the hazard of ground subsidence using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision trees. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for the prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
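
    The validation step described above reduces, in miniature, to computing the area under the ROC curve of hazard scores against held-out subsidence occurrences. The sketch below does exactly that with synthetic scores and labels standing in for the GSH map and the withheld subsidence data.

```python
# AUC validation sketch: hazard scores vs. held-out occurrences.
# Scores and labels here are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 1000
y_heldout = rng.integers(0, 2, n)               # 1 = observed subsidence
# Synthetic GSH scores, mildly informative about the outcome.
gsh_score = 0.6 * y_heldout + rng.normal(0, 0.5, n)

print(f"AUC = {roc_auc_score(y_heldout, gsh_score):.3f}")
```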

  13. A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem

    Directory of Open Access Journals (Sweden)

    Omid Boyer

    2013-01-01

    Full Text Available Technological progress is a cause of the worldwide increase in industrial hazardous waste. Management of hazardous waste is a significant issue due to the risk imposed on the environment and human life. This risk can be a result of the location of undesirable facilities and also of routing hazardous waste. In this paper a bi-objective mixed integer programming model for location-routing of industrial hazardous waste is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk. The risk of population exposure within a bandwidth along the route is used to measure transportation risk. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem demonstrate the conflict between the two objectives. Hence, it is possible to decrease the cost value by marginally increasing the transportation risk value and vice versa. A weighted sum method is utilized to combine the two objective functions into a single objective function. To solve the problem, GAMS software with the CPLEX solver is used. The model is applied to Markazi province in Iran.
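
    The weighted-sum scalarization described above can be sketched on a toy instance: open treatment sites and route waste while minimizing a weighted combination of cost and a population-exposure score. The sketch uses the open-source PuLP/CBC stack rather than the paper's GAMS/CPLEX setup, and every number, site, and constraint below is invented; the real model is far richer (recycling, disposal, investment, selling revenue).

```python
# Toy weighted-sum scalarization of a bi-objective location-routing MILP.
import pulp

sites = ["s1", "s2", "s3"]
sources = ["a", "b"]
open_cost = {"s1": 100, "s2": 80, "s3": 120}
ship_cost = {("a","s1"): 4, ("a","s2"): 7, ("a","s3"): 3,
             ("b","s1"): 6, ("b","s2"): 2, ("b","s3"): 5}
risk = {("a","s1"): 9, ("a","s2"): 2, ("a","s3"): 8,
        ("b","s1"): 4, ("b","s2"): 3, ("b","s3"): 7}
supply, w = {"a": 10, "b": 15}, 0.7     # w trades off cost vs. risk

y = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("ship", ship_cost, lowBound=0)

prob = pulp.LpProblem("hazwaste", pulp.LpMinimize)
cost = pulp.lpSum(open_cost[s] * y[s] for s in sites) + \
       pulp.lpSum(ship_cost[k] * x[k] for k in ship_cost)
expo = pulp.lpSum(risk[k] * x[k] for k in risk)
prob += w * cost + (1 - w) * expo                 # weighted-sum objective
for a in sources:                                  # ship all waste out
    prob += pulp.lpSum(x[(a, s)] for s in sites) == supply[a]
for a, s in ship_cost:                             # only to open sites
    prob += x[(a, s)] <= supply[a] * y[s]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if y[s].value() > 0.5])
```

    Re-solving over a sweep of w between 0 and 1 traces an approximation of the Pareto front between the two objectives, which is the trade-off the abstract describes.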

  14. Time-aggregation effects on the baseline of continuous-time and discrete-time hazard models

    NARCIS (Netherlands)

    ter Hofstede, F.; Wedel, M.

    In this study we reinvestigate the effect of time-aggregation for discrete- and continuous-time hazard models. We reanalyze the results of a previous Monte Carlo study by ter Hofstede and Wedel (1998), in which the effects of time-aggregation on the parameter estimates of hazard models were
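
    The mechanism under study can be made concrete: aggregating a continuous-time hazard λ(t) over an interval gives the discrete-time hazard h_j = 1 − exp(−∫ λ(t) dt), so coarser aggregation changes the baseline. The sketch below computes this for two interval widths under an arbitrary Weibull baseline; the baseline and its parameters are assumptions for illustration.

```python
# Discrete-time hazards implied by aggregating a continuous-time hazard:
# h_j = 1 - exp(-(H(t_j) - H(t_{j-1}))). Baseline is an arbitrary Weibull.
import numpy as np

lam0, k = 0.1, 1.5                       # Weibull scale/shape (assumed)
cum_hazard = lambda t: (lam0 * t) ** k   # H(t) = (lam0 * t)^k

for width in (1.0, 4.0):                 # two aggregation levels
    edges = np.arange(0, 20 + width, width)
    H = cum_hazard(edges)
    h = 1 - np.exp(-np.diff(H))          # discrete hazards per interval
    print(f"width {width}: first discrete hazards {h[:3].round(4)}")
```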

  15. Impossibility Theorem in Proportional Representation Problem

    International Nuclear Information System (INIS)

    Karpov, Alexander

    2010-01-01

    The study examines the general axiomatics of Balinski and Young and analyzes existing proportional representation methods using this approach. The second part of the paper provides a new axiomatics based on rational choice models. The new system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
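
    The flavor of such monotonicity failures can be demonstrated with the classical largest-remainder (Hamilton) method, which exhibits the Alabama paradox: a party can lose a seat when the house grows. The sketch below implements Hamilton apportionment and scans house sizes for a paradox; the vote vector is arbitrary and the example is illustrative of the paper's theme rather than its proof.

```python
# Largest-remainder (Hamilton) apportionment with an Alabama-paradox scan.
def hamilton(votes, seats):
    quotas = [v * seats / sum(votes) for v in votes]
    alloc = [int(q) for q in quotas]
    # hand out leftover seats by largest fractional remainder
    for i in sorted(range(len(votes)),
                    key=lambda i: quotas[i] - alloc[i],
                    reverse=True)[: seats - sum(alloc)]:
        alloc[i] += 1
    return alloc

votes = [5670, 3850, 480]
prev = hamilton(votes, 1)
for seats in range(2, 60):
    cur = hamilton(votes, seats)
    if any(c < p for c, p in zip(cur, prev)):   # a party lost a seat
        print(f"paradox: {prev} at {seats-1} seats -> {cur} at {seats}")
    prev = cur
```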

  16. Cognitive and Metacognitive Aspects of Proportional Reasoning

    Science.gov (United States)

    Modestou, Modestina; Gagatsis, Athanasios

    2010-01-01

    In this study we attempt to propose a new model of proportional reasoning based on both bibliographical and research data. This is pursued with the help of three written tests involving analogical, proportional, and non-proportional situations that were administered to pupils from grade 7 to 9. The results suggest the existence of a…

  17. Modelos de regresión para variables expresadas como una proporción continua Regression models for variables expressed as a continuous proportion

    Directory of Open Access Journals (Sweden)

    Aarón Salinas-Rodríguez

    2006-10-01

    Full Text Available OBJECTIVE: To describe some of the statistical alternatives available for the study of continuous proportions and to compare the different existing models in order to show their advantages and disadvantages, by means of their application to a practical example from the public health field. MATERIAL AND METHODS: Based on the 2003 National Survey of Reproductive Health, the proportion of individual coverage in the family planning program (proposed in a previous study conducted at the National Institute of Public Health in Cuernavaca, Morelos, Mexico, 2005) was modeled using normal, gamma, beta, and quasi-likelihood regression models. The variant of the Akaike information criterion (AIC) proposed by McQuarrie and Tsai was used to select the best model. Then, by simulation (a Monte Carlo/Markov chain approach), a beta-distributed variable was generated to evaluate the behavior of the four models as the sample size varied from 100 to 18,000 observations. RESULTS: The results show that the best statistical option for the analysis of continuous proportions is the beta regression model, according to its assumptions and the AIC value. The simulation showed that, as the sample size increases, the gamma model and especially the quasi-likelihood model approximate the beta model significantly. CONCLUSIONS: For modeling continuous proportions, the parametric beta regression approach is recommended and use of the normal model should be avoided. For large sample sizes, the quasi-likelihood approach represents a good alternative.
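
    A minimal version of the recommended beta regression, in the mean/precision parameterization, can be fitted by maximum likelihood in a few lines: mu_i = sigmoid(x_i'b) and y_i ~ Beta(mu_i*phi, (1-mu_i)*phi). The sketch below fits this to simulated data; it illustrates the model class compared in the article, not the survey analysis itself.

```python
# Minimal beta regression (mean/precision parameterization) by MLE.
# Data are simulated; true parameters are recovered approximately.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta
from scipy.special import expit

rng = np.random.default_rng(8)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
b_true, phi_true = np.array([0.2, 0.8]), 30.0
mu = expit(X @ b_true)
y = rng.beta(mu * phi_true, (1 - mu) * phi_true)

def nll(theta):
    b, log_phi = theta[:2], theta[2]
    m, phi = expit(X @ b), np.exp(log_phi)
    return -beta.logpdf(y, m * phi, (1 - m) * phi).sum()

fit = minimize(nll, x0=np.zeros(3), method="BFGS")
print("coef:", fit.x[:2].round(3), "phi:", np.exp(fit.x[2]).round(1))
```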

  18. An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard

    Directory of Open Access Journals (Sweden)

    Xiulan Wang

    2014-01-01

    …it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. The model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.

  19. Restrictions and Proportionality

    DEFF Research Database (Denmark)

    Werlauff, Erik

    2009-01-01

    The article discusses three central aspects of the freedoms under European Community law, namely 1) the prohibition against restrictions as an important extension of the prohibition against discrimination, 2) a prohibition against exit restrictions which is just as important as the prohibition...... against host country restrictions, but which is often not recognised to the same extent by national law, and 3) the importance of also identifying and recognising an exit restriction, so that it is possible to achieve the required test of appropriateness and proportionality in relation to the rule...

  20. The divine proportion

    CERN Document Server

    Huntley, H E

    1970-01-01

    Using simple mathematical formulas, most as basic as Pythagoras's theorem and requiring only a very limited knowledge of mathematics, Professor Huntley explores the fascinating relationship between geometry and aesthetics. Poetry, patterns like Pascal's triangle, philosophy, psychology, music, and dozens of simple mathematical figures are enlisted to show that the "divine proportion" or "golden ratio" is a feature of geometry and analysis which awakes answering echoes in the human psyche. When we judge a work of art aesthetically satisfying, according to his formulation, we are making it c…

  1. Estimation of foot joint kinetics in three and four segment foot models using an existing proportionality scheme: Application in paediatric barefoot walking.

    Science.gov (United States)

    Deschamps, Kevin; Eerdekens, Maarten; Desmet, Dirk; Matricali, Giovanni Arnoldo; Wuite, Sander; Staes, Filip

    2017-08-16

    Recent studies estimating foot segment kinetic patterns have produced inconclusive data and have not dissociated the kinetics of the Chopart and Lisfranc joints. The current study therefore aimed at reproducing independent, recently published three-segment foot kinetic data (Study 1) and, in a second stage, expanding the estimation towards a four-segment model (Study 2). Concerning the reproducibility study, two recently published three-segment foot models (Bruening et al., 2014; Saraswat et al., 2014) were reproduced and kinetic parameters were incorporated in order to calculate joint moments and powers of paediatric cohorts during gait. Ground reaction forces were measured with an integrated force/pressure plate measurement set-up and a recently published proportionality scheme was applied to determine subarea total ground reaction forces. Regarding Study 2, moments and powers were estimated with respect to the Instituto Ortopedico Rizzoli four-segment model. The proportionality scheme was expanded in this study and the impact of joint centre location on kinetic data was evaluated. Findings related to Study 1 showed generally good agreement with the kinetic data published by Bruening et al. (2014). In contrast, the peak ankle, midfoot and hallux powers published by Saraswat et al. (2014) are disputed. Findings of Study 2 revealed that the Chopart joint encompasses both power absorption and generation, whereas the Lisfranc joint mainly contributes to power generation. The results highlight the necessity for further studies in the field of foot kinetic models and provide a first estimation of the kinetic behaviour of the Lisfranc joint. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup…
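
    For readers who want to try an Aalen additive fit, the sketch below uses the lifelines package (an assumption about tooling; any survival library with an Aalen additive fitter would do) on a small synthetic dataset with a binary randomized treatment and one auxiliary covariate.

        import numpy as np
        import pandas as pd
        from lifelines import AalenAdditiveFitter

        rng = np.random.default_rng(1)
        n = 500
        treat = rng.integers(0, 2, n)            # randomized binary treatment
        x = rng.normal(size=n)                   # auxiliary baseline covariate
        # Event times from an additive-hazard-like mechanism (illustrative only)
        rate = 0.10 + 0.05 * treat + 0.03 * (x > 0)
        t = rng.exponential(1.0 / rate)
        censor = rng.exponential(15.0, n)
        df = pd.DataFrame({
            "T": np.minimum(t, censor),
            "E": (t <= censor).astype(int),
            "treat": treat,
            "x": x,
        })

        aaf = AalenAdditiveFitter(fit_intercept=True)
        aaf.fit(df, duration_col="T", event_col="E")
        print(aaf.cumulative_hazards_.tail())    # cumulative regression functions B_k(t)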

  3. Dynamic modelling of a forward osmosis-nanofiltration integrated process for treating hazardous wastewater.

    Science.gov (United States)

    Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik

    2016-11-01

    Dynamic modelling and simulation of a complete integrated nanofiltration-forward osmosis system was carried out, along with an economic evaluation, to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. The system, operated in a closed loop, not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by making wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is reflected in a high overall correlation coefficient (R² > 0.98) and a low relative error, with the forward osmosis loop operating at a reasonably high flux of 56-58 L per square metre per hour.

  4. Linear non-threshold (LNT) radiation hazards model and its evaluation

    International Nuclear Information System (INIS)

    Min Rui

    2011-01-01

    In order to introduce the linear non-threshold (LNT) model used in studies of the dose effect of radiation hazards and to evaluate its application, a comprehensive literature analysis was made. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can account well for cell survival curves across the high, medium and low absorbed dose ranges. There are still many uncertainties in assessment models of the effective dose of internal radiation based on the LNT assumptions and individual mean organ equivalents, and it is necessary to establish gender-specific voxel human models that take gender differences into account. In summary, the advantages and disadvantages of the various models coexist; until a new theory and model are established, the LNT model remains the most scientific attitude. (author)

  5. Conceptual model for millennial climate variability: a possible combined solar-thermohaline circulation origin for the ∼1,500-year cycle

    Energy Technology Data Exchange (ETDEWEB)

    Dima, Mihai [Alfred Wegener Institute for Polar and Marine Research, Bremerhaven (Germany); University of Bucharest, Department of Atmospheric Physics, Faculty of Physics, P.O. Box 11440, Magurele, Bucharest (Romania); Lohmann, Gerrit [Alfred Wegener Institute for Polar and Marine Research, Bremerhaven (Germany)

    2009-02-15

    Dansgaard-Oeschger and Heinrich events are the most pronounced climatic changes over the last 120,000 years. Although many of their properties were derived from climate reconstructions, the associated physical mechanisms are not yet fully understood. These events are paced by a ∼1,500-year periodicity whose origin remains unclear. In a conceptual model approach, we show that this millennial variability can originate from rectification of an external (solar) forcing, and suggest that the thermohaline circulation, through a threshold response, could be the rectifier. We argue that internal threshold response of the thermohaline circulation (THC) to solar forcing is more likely to produce the observed DO cycles than amplification of weak direct ∼1,500-year forcing of unknown origin, by THC. One consequence of our concept is that the millennial variability is viewed as a derived mode without physical processes on its characteristic time scale. Rather, the mode results from the linear representation in the Fourier space of nonlinearly transformed fundamental modes. (orig.)

  6. An animal model to study toxicity of central nervous system therapy for childhood acute lymphoblastic leukemia: Effects on growth and craniofacial proportion

    International Nuclear Information System (INIS)

    Schunior, A.; Zengel, A.E.; Mullenix, P.J.; Tarbell, N.J.; Howes, A.; Tassinari, M.S.

    1990-01-01

    Many long term survivors of childhood acute lymphoblastic leukemia have short stature, as well as craniofacial and dental abnormalities, as side effects of central nervous system prophylactic therapy. An animal model is presented to assess these adverse effects on growth. Cranial irradiation (1000 cGy) with and without prednisolone (18 mg/kg i.p.) and methotrexate (2 mg/kg i.p.) was administered to 17- and 18-day-old Sprague-Dawley male and female rats. Animals were weighed 3 times/week. Final body weight and body length were measured at 150 days of age. Femur length and craniofacial dimensions were measured directly from the bones, using calipers. For all exposed groups there was a permanent suppression of weight gain with no catch-up growth or normal adolescent growth spurt. Body length was reduced for all treated groups, as were the ratios of body weight to body length and cranial length to body length. Animals subjected to cranial irradiation exhibited microcephaly, whereas those who received a combination of radiation and chemotherapy demonstrated altered craniofacial proportions in addition to microcephaly. Changes in growth patterns and skeletal proportions exhibited sexually dimorphic characteristics. The results indicate that cranial irradiation is a major factor in the growth failure in exposed rats, but chemotherapeutic agents contribute significantly to the outcome of growth and craniofacial dimensions

  7. A nuclear proportional counter

    International Nuclear Information System (INIS)

    1973-01-01

    The invention relates to a nuclear proportional counter comprising, in a bulb filled with a low-pressure gas, a wire forming an anode and a cathode, characterized in that said cathode is constituted by two plane plates parallel to each other and to the anode wire, and in that two branches of a circuit are connected to the anode wire end-portions, each branch comprising a pre-amplifier, a measuring circuit consisting of a differentiator-integrator-differentiator amplifier and a zero detector, one of the branches comprising an adjustable delay circuit, both branches jointly feeding a conversion circuit for converting the pulse duration into amplitude, said conversion circuit being followed by a multi-channel analyzer, optionally provided with a recorder

  8. Load proportional safety brake

    Science.gov (United States)

    Cacciola, M. J.

    1979-01-01

    This brake is a self-energizing mechanical friction brake intended for use in a rotary drive system. It incorporates a torque sensor which cuts power to the power unit on any overload condition. The brake is capable of driving against an opposing load or driving (paying out) an aiding load in either direction of rotation. The brake also acts as a no-back device when torque is applied to the output shaft. The advantages of using this type of device are: (1) low frictional drag when driving; (2) smooth paying-out of an aiding load with no runaway danger; (3) energy absorption proportional to load; (4) the no-back activates within a few degrees of output shaft rotation and resets automatically; and (5) built-in overload protection.

  9. Do French macroseismic intensity observations agree with expectations from the European Seismic Hazard Model 2013?

    Science.gov (United States)

    Rey, Julien; Beauval, Céline; Douglas, John

    2018-02-01

    Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundred years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
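
    The comparison pipeline the abstract describes (convert intensities to PGA, then count exceedances) can be sketched as below; the conversion coefficients are placeholders, not the actual Caprio et al. (2015) values, and the intensity history is invented.

        import numpy as np

        def intensity_to_pga(I, c0=-2.27, c1=0.30):
            """Toy GMICE of the form log10(PGA [g]) = c0 + c1 * I.
            The coefficients are illustrative placeholders only."""
            return 10.0 ** (c0 + c1 * np.asarray(I, dtype=float))

        # Hypothetical intensity history at one city over a complete time window
        intensities = np.array([4, 5, 4, 6, 5, 4, 4, 5])
        pga = intensity_to_pga(intensities)

        threshold_g = 0.05
        n_exceed = int((pga > threshold_g).sum())
        print(f"{n_exceed} exceedances of {threshold_g} g in the observation window")
        # The observed count would then be compared with the number predicted by
        # the hazard model (e.g. ESHM13) for the same window length.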

  10. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence to obtain the 'Habilitation à Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for structural specificity (conventional structures or high-risk installations), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion to which the structure has to resist (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of the IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert appraisal practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion for designing or verifying the stability of structures. (author)

  12. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    Science.gov (United States)

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  13. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard analysis are reported. The analysis was developed through the following steps: landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through a back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulative daily intensity of 280-330 mm. This value can be considered the hydrological triggering…
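
    The infinite slope model mentioned in the abstract reduces to a closed-form factor of safety; the sketch below evaluates it for invented soil parameters (the formulation with a pore-pressure term is one common variant, not necessarily the exact one used in the study).

        import math

        def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, gamma_w=9.81, m=0.0):
            """Factor of safety of an infinite slope.
            c: effective cohesion [kPa], phi_deg: friction angle [deg],
            gamma: soil unit weight [kN/m^3], z: soil depth [m],
            beta_deg: slope angle [deg], m: fraction of z that is saturated (0..1)."""
            beta = math.radians(beta_deg)
            phi = math.radians(phi_deg)
            pore_pressure = m * gamma_w * z * math.cos(beta) ** 2
            resisting = c + (gamma * z * math.cos(beta) ** 2 - pore_pressure) * math.tan(phi)
            driving = gamma * z * math.sin(beta) * math.cos(beta)
            return resisting / driving

        # Hypothetical parameters: dry vs fully saturated soil cover
        print(round(infinite_slope_fs(c=5.0, phi_deg=32, gamma=18.0, z=1.5, beta_deg=35, m=0.0), 2))  # ~1.29
        print(round(infinite_slope_fs(c=5.0, phi_deg=32, gamma=18.0, z=1.5, beta_deg=35, m=1.0), 2))  # ~0.80
        # FS < 1 indicates instability; saturation (m -> 1) lowers FS.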

  14. A new approach to hazardous materials transportation risk analysis: decision modeling to identify critical variables.

    Science.gov (United States)

    Clark, Renee M; Besterfield-Sacre, Mary E

    2009-03-01

    We take a novel approach to analyzing hazardous materials transportation risk in this research. Previous studies analyzed this risk from an operations research (OR) or quantitative risk assessment (QRA) perspective by minimizing or calculating risk along a transport route. Further, even though the majority of incidents occur when containers are unloaded, the research has not focused on transportation-related activities, including container loading and unloading. In this work, we developed a decision model of a hazardous materials release during unloading using actual data and an exploratory data modeling approach. Previous studies have had a theoretical perspective in terms of identifying and advancing the key variables related to this risk, and there has not been a focus on probability and statistics-based approaches for doing this. Our decision model empirically identifies the critical variables using an exploratory methodology for a large, highly categorical database involving latent class analysis (LCA), loglinear modeling, and Bayesian networking. Our model identified the most influential variables and countermeasures for two consequences of a hazmat incident, dollar loss and release quantity, and is one of the first models to do this. The most influential variables were found to be related to the failure of the container. In addition to analyzing hazmat risk, our methodology can be used to develop data-driven models for strategic decision making in other domains involving risk.
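
    As a small taste of the exploratory categorical analysis described above (the actual study combines latent class analysis, loglinear modeling, and Bayesian networks), the sketch below tests the association between two invented categorical incident variables with a loglinear-style independence test.

        import numpy as np
        import pandas as pd
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(6)
        n = 2000
        # Hypothetical incident records: container type and release severity
        container = rng.choice(["drum", "tank", "cylinder"], size=n, p=[0.5, 0.3, 0.2])
        severity = np.where(
            (container == "drum") & (rng.random(n) < 0.4), "high",
            rng.choice(["low", "high"], size=n, p=[0.8, 0.2]),
        )

        table = pd.crosstab(container, severity)
        chi2, p, dof, _ = chi2_contingency(table)
        print(table)
        print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")   # small p -> variables associated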

  15. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and…
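
    The final step the abstract describes (combining per-rupture intensity measures with rupture probabilities into a hazard curve) can be sketched as follows; this is a drastic simplification of CyberShake (one intensity value per rupture, no rupture variations), and the probabilities and intensities are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        n_rup = 1000
        p_occ = rng.uniform(1e-4, 5e-3, n_rup)        # per-rupture occurrence probabilities
        im = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n_rup)  # simulated PGA [g]

        levels = np.array([0.05, 0.1, 0.2, 0.4])
        for x in levels:
            exceeds = im > x
            # Assuming independent rupture occurrences:
            p_exceed = 1.0 - np.prod(1.0 - p_occ[exceeds])
            print(f"P(PGA > {x:4.2f} g) = {p_exceed:.4f}")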

  16. A novel concurrent pictorial choice model of mood-induced relapse in hazardous drinkers.

    Science.gov (United States)

    Hardy, Lorna; Hogarth, Lee

    2017-12-01

    This study tested whether a novel concurrent pictorial choice procedure, inspired by animal self-administration models, is sensitive to the motivational effect of negative mood induction on alcohol-seeking in hazardous drinkers. Forty-eight hazardous drinkers (scoring ≥7 on the Alcohol Use Disorders Inventory) recruited from the community completed measures of alcohol dependence, depression, and drinking coping motives. Baseline alcohol-seeking was measured by percent choice to enlarge alcohol- versus food-related thumbnail images in two-alternative forced-choice trials. Negative and positive mood was then induced in succession by means of self-referential affective statements and music, and percent alcohol choice was measured after each induction in the same way as baseline. Baseline alcohol choice correlated with alcohol dependence severity, r = .42, p = .003, drinking coping motives (in two questionnaires, r = .33, p = .02 and r = .46, p = .001), and depression symptoms, r = .31, p = .03. Alcohol choice was increased by negative mood over baseline (p < …), whereas the mood-induced change in alcohol choice was not related to gender, alcohol dependence, drinking to cope, or depression symptoms (ps ≥ .37). The concurrent pictorial choice measure is a sensitive index of the relative value of alcohol, and provides an accessible experimental model to study negative mood-induced relapse mechanisms in hazardous drinkers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Stochastic modeling of a hazard detection and avoidance maneuver—The planetary landing case

    International Nuclear Information System (INIS)

    Witte, Lars

    2013-01-01

    Hazard Detection and Avoidance (HDA) functionalities, i.e. the ability to recognize and avoid potentially hazardous terrain features, are regarded as an enabling technology for upcoming robotic planetary landing missions. In the run-up to any landing mission, the landing-site safety assessment is an important task in the systems and mission engineering process. To contribute to this task, this paper presents a mathematical framework that incorporates the HDA strategy and system constraints into this mission engineering aspect. The HDA maneuver is modeled as a stochastic decision process based on Markov chains, mapping an initial dispersion at an arrival gate to a new dispersion pattern affected by the divert decision-making and system constraints. The implications for an efficient numerical implementation are addressed. An example case study is given to demonstrate the implementation and use of the proposed scheme.
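
    A minimal sketch of the idea, assuming a discretized landing area: an initial dispersion (probability vector over terrain cells) is pushed through a divert transition matrix to obtain the post-maneuver dispersion. The cells, probabilities and transition values are invented.

        import numpy as np

        # Three terrain cells: 0 = safe, 1 = hazardous, 2 = safe alternate site
        initial_dispersion = np.array([0.5, 0.3, 0.2])   # P(arrival over each cell)

        # Row-stochastic divert matrix: P(land in column j | arrive over cell i).
        # A lander arriving over the hazardous cell diverts to a safe cell with
        # probability 0.9 (limited by fuel / divert-authority constraints).
        P = np.array([
            [0.95, 0.00, 0.05],
            [0.45, 0.10, 0.45],
            [0.05, 0.00, 0.95],
        ])

        final_dispersion = initial_dispersion @ P
        print(final_dispersion)                  # -> [0.62 0.03 0.35]
        print("P(landing in hazard):", final_dispersion[1])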

  18. Recent developments in health risks modeling techniques applied to hazardous waste site assessment and remediation

    International Nuclear Information System (INIS)

    Mendez, W.M. Jr.

    1990-01-01

    Remediation of hazardous and mixed waste sites is often driven by assessments of the human health risks posed by exposure to hazardous substances released from these sites. The methods used to assess potential health risk involve, either implicitly or explicitly, models for pollutant releases, transport, human exposure and intake, and for characterizing health effects. Because knowledge about pollutant fate and transport processes at most waste sites is quite limited, and data costs are quite high, most of the models currently used to assess risk, and endorsed by regulatory agencies, are quite simple. The models employ many simplifying assumptions about pollutant fate and distribution in the environment, about human pollutant intake, and about toxicologic responses to pollutant exposures. An important consequence of data scarcity and model simplification is that risk estimates are quite uncertain, and estimating the magnitude of the uncertainty associated with a risk assessment has been very difficult. A number of methods have been developed to address the issue of uncertainty in risk assessments in a manner that realistically reflects uncertainty in model specification and data limitations. These methods include the definition of multiple exposure scenarios, sensitivity analyses, and explicit probabilistic modeling of uncertainty. Recent developments in this area will be discussed, along with their possible impacts on remediation programs, and remaining obstacles to their wider use and acceptance by the scientific and regulatory communities
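
    As an illustration of the probabilistic treatment of uncertainty mentioned above, the sketch below propagates uncertain exposure parameters through a generic intake equation (dose = C × IR × EF / BW) by Monte Carlo sampling; the distributions and values are invented, not taken from any guidance document.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        # Uncertain inputs (illustrative distributions only)
        conc = rng.lognormal(np.log(0.5), 0.6, n)   # contaminant concentration [mg/L]
        intake = rng.normal(2.0, 0.3, n).clip(0.5)  # water intake [L/day]
        ef = rng.uniform(200, 365, n) / 365.0       # exposure frequency [fraction of year]
        bw = rng.normal(70.0, 10.0, n).clip(30)     # body weight [kg]

        dose = conc * intake * ef / bw              # [mg/kg/day]

        print(f"median dose: {np.median(dose):.2e} mg/kg/day")
        print(f"95th pct:    {np.percentile(dose, 95):.2e} mg/kg/day")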

  19. Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California

    Science.gov (United States)

    Pike, Richard J.; Graymer, Russell W.

    2008-01-01

    With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of:
    * Introduction by Russell W. Graymer
    * Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson
    * Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk
    * Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk
    * Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer
    * Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike
    The plates consist of:
    * Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk
    * Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven…

  20. Flood Hazard Mapping by Using Geographic Information System and Hydraulic Model: Mert River, Samsun, Turkey

    Directory of Open Access Journals (Sweden)

    Vahdettin Demir

    2016-01-01

    Full Text Available In this study, flood hazard maps were prepared for the Mert River Basin, Samsun, Turkey, by using GIS and the Hydrologic Engineering Center's River Analysis System (HEC-RAS). In this river basin, human life losses and a significant amount of property damage were experienced in the 2012 flood. The preparation of the flood risk maps employed in the study includes the following steps: (1) digitization of topographical data and preparation of a digital elevation model using ArcGIS, (2) simulation of flood flows of different return periods using a hydraulic model (HEC-RAS), and (3) preparation of flood risk maps by integrating the results of (1) and (2).

  1. Development of hydrogeological modelling approaches for assessment of consequences of hazardous accidents at nuclear power plants

    International Nuclear Information System (INIS)

    Rumynin, V.G.; Mironenko, V.A.; Konosavsky, P.K.; Pereverzeva, S.A.

    1994-07-01

    This paper introduces some modeling approaches for predicting the influence of hazardous accidents at nuclear reactors on groundwater quality. Possible pathways for radioactive releases from nuclear power plants were considered to conceptualize boundary conditions for solving the subsurface radionuclide transport problems. Some approaches to incorporating physical and chemical interactions into transport simulators have been developed. The hydrogeological forecasts were based on numerical and semi-analytical scale-dependent models. They have been applied to assess the possible impact of the nuclear power plants designed in Russia on groundwater reservoirs

  2. Movement Performance of Human-Robot Cooperation Control Based on EMG-Driven Hill-Type and Proportional Models for an Ankle Power-Assist Exoskeleton Robot.

    Science.gov (United States)

    Ao, Di; Song, Rong; Gao, JinWu

    2017-08-01

    Although the merits of electromyography (EMG)-based control of powered assistive systems have been certified, the factors that affect the performance of EMG-based human-robot cooperation, which are very important, have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperation control for an ankle power-assist exoskeleton robot. To achieve the goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). The two control models could estimate the real-time ankle joint torque, and HNM is more accurate and can account for the change of the joint angle and muscle dynamics. Then, eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With the various levels of assist based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistant torque and kinematic signals to characterize the movement performances, whereas the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect the muscular efforts. The results demonstrated that the muscular effort and smoothness of tracking movements decreased with an increase in the assistant ratio. Compared with LPM, subjects made lower physical efforts and generated smoother movements when using HNM, which implied that a more physiologically appropriate model could enable more natural and human-like human-robot cooperation and has potential value for improvement of human-exoskeleton interaction in future applications.
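
    The two movement-performance measures named in the abstract are straightforward to compute from sampled signals; the sketch below shows one plausible formulation (RMSE of the tracking error, RMS jerk from numerically differentiated position), with synthetic signals standing in for the recorded data.

        import numpy as np

        fs = 100.0                                  # sampling rate [Hz]
        t = np.arange(0, 10, 1 / fs)
        target = np.sin(2 * np.pi * 0.5 * t)        # sinusoidal tracking target
        rng = np.random.default_rng(4)
        actual = target + 0.05 * rng.standard_normal(t.size)  # synthetic ankle trajectory

        # Root mean square tracking error
        rmse = np.sqrt(np.mean((actual - target) ** 2))

        # Root mean square jerk: third derivative of position
        jerk = np.gradient(np.gradient(np.gradient(actual, 1 / fs), 1 / fs), 1 / fs)
        rmsj = np.sqrt(np.mean(jerk ** 2))

        print(f"RMSE = {rmse:.4f}, RMSJ = {rmsj:.1f}  (smaller = smoother tracking)")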

  3. Use of agent-based modelling in emergency management under a range of flood hazards

    Directory of Open Access Journals (Sweden)

    Tagg Andrew

    2016-01-01

    Full Text Available The Life Safety Model (LSM) was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.

  4. A transparent and data-driven global tectonic regionalization model for seismic hazard assessment

    Science.gov (United States)

    Chen, Yen-Shin; Weatherill, Graeme; Pagani, Marco; Cotton, Fabrice

    2018-05-01

    A key concept that is common to many assumptions inherent within seismic hazard assessment is that of tectonic similarity. This recognizes that certain regions of the globe may display similar geophysical characteristics, such as in the attenuation of seismic waves, the magnitude scaling properties of seismogenic sources or the seismic coupling of the lithosphere. Previous attempts at tectonic regionalization, particularly within a seismic hazard assessment context, have often been based on expert judgements; in most of these cases, the process for delineating tectonic regions is neither reproducible nor consistent from location to location. In this work, the regionalization process is implemented in a scheme that is reproducible, comprehensible from a geophysical rationale, and revisable when new relevant data are published. A spatial classification-scheme is developed based on fuzzy logic, enabling the quantification of concepts that are approximate rather than precise. Using the proposed methodology, we obtain a transparent and data-driven global tectonic regionalization model for seismic hazard applications as well as the subjective probabilities (e.g. degree of being active/degree of being cratonic) that indicate the degree to which a site belongs in a tectonic category.
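
    A toy version of the fuzzy-logic classification described above: each site gets degrees of membership in tectonic categories via simple membership functions of a geophysical attribute, here crustal strain rate. The attribute, breakpoints and categories are invented for illustration and are not the paper's actual scheme.

        import numpy as np

        def trapezoid(x, a, b, c, d):
            """Trapezoidal membership function: rises on [a, b], flat on [b, c], falls on [c, d]."""
            x = np.asarray(x, dtype=float)
            rise = np.clip((x - a) / (b - a), 0, 1)
            fall = np.clip((d - x) / (d - c), 0, 1)
            return np.minimum(rise, fall)

        # Log10 strain rate [1/yr] for three hypothetical sites
        log_strain = np.array([-17.5, -15.2, -13.8])

        mu_cratonic = trapezoid(log_strain, -20, -19, -17, -16)   # low strain -> cratonic
        mu_active   = trapezoid(log_strain, -16, -15, -13, -12)   # high strain -> active

        for s, c, a in zip(log_strain, mu_cratonic, mu_active):
            print(f"log10(strain)={s:6.1f}  degree cratonic={c:.2f}  degree active={a:.2f}")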

  5. The 2018 and 2020 Updates of the U.S. National Seismic Hazard Models

    Science.gov (United States)

    Petersen, M. D.

    2017-12-01

    During 2018 the USGS will update the 2014 National Seismic Hazard Models by incorporating new seismicity models, ground motion models, site factors, fault inputs, and by improving weights to ground motion models using empirical and other data. We will update the earthquake catalog for the U.S. and introduce new rate models. Additional fault data will be used to improve rate estimates on active faults. New ground motion models (GMMs) and site factors for Vs30 have been released by the Pacific Earthquake Engineering Research Center (PEER) and we will consider these in assessing ground motions in craton and extended margin regions of the central and eastern U.S. The USGS will also include basin-depth terms for selected urban areas of the western United States to improve long-period shaking assessments using published depth estimates to 1.0 and 2.5 km/s shear wave velocities. We will produce hazard maps for input into the building codes that span a broad range of periods (0.1 to 5 s) and site classes (shear wave velocity from 2000 m/s to 200 m/s in the upper 30 m of the crust, Vs30). In the 2020 update we plan on including: a new national crustal model that defines basin depths required in the latest GMMs, new 3-D ground motion simulations for several urban areas, new magnitude-area equations, and new fault geodetic and geologic strain rate models. The USGS will also consider including new 3-D ground motion simulations for inclusion in these long-period maps. These new models are being evaluated and will be discussed at one or more regional and topical workshops held at the beginning of 2018.

  6. Evaluation of MEDALUS model for desertification hazard zonation using GIS; study area: Iyzad Khast plain, Iran.

    Science.gov (United States)

    Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik

    2007-08-15

    In this study, the MEDALUS model, along with GIS mapping techniques, is used to determine the desertification hazard for a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four indices of the MEDALUS model: climate, soil, vegetation and land use. Since these parameters have mostly been presented for the Mediterranean region in the past, the next step included the addition of other indicators such as groundwater and wind erosion. Then all of the layers, weighted by the environmental conditions present in the area, were used (following the same MEDALUS framework) before a desertification map was prepared. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters into the model allows for a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, low land-quality management, vegetation degradation and the salinization of soil and water resources.
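
    In the MEDALUS approach, quality-index layers are combined as a geometric mean into an environmentally sensitive area index (ESAI); the sketch below shows that combination, extended here with the two extra indicators the study mentions. The index values are invented, and the exact weighting used in the study is an assumption.

        import numpy as np

        def esai(*quality_indices):
            """Geometric mean of quality-index layers (each scaled so ~1 = good, 2 = poor)."""
            q = np.stack(quality_indices)
            return q.prod(axis=0) ** (1.0 / len(quality_indices))

        # One value per grid cell for three hypothetical cells
        climate      = np.array([1.2, 1.7, 1.9])
        soil         = np.array([1.1, 1.5, 1.8])
        vegetation   = np.array([1.3, 1.6, 1.9])
        land_use     = np.array([1.0, 1.4, 1.8])
        groundwater  = np.array([1.2, 1.5, 1.9])   # added indicator
        wind_erosion = np.array([1.1, 1.6, 2.0])   # added indicator

        print(esai(climate, soil, vegetation, land_use, groundwater, wind_erosion))
        # Higher ESAI -> greater desertification sensitivity (class thresholds vary by study)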

  7. TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment

    Science.gov (United States)

    Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano

    2016-04-01

    Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediment and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software tools are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Therefore, some effort seems to be necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009; Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining the advantages offered by the SaaS (Software as a Service) delivery model and by WebGIS technology, and hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an…

  8. Tsunami-hazard assessment based on subaquatic slope-failure susceptibility and tsunami-inundation modeling

    Science.gov (United States)

    Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael

    2015-04-01

    Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform a thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of the resulting tsunami waves and iv) mapping of the intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effects on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so-called «intensity maps» for flooding that provide a link to the established hazard-mapping framework, which so far does not include these phenomena. The current versions of these maps consider a 'worst case' deterministic earthquake scenario; however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of…

  9. Estimation of direct effects for survival data by using the Aalen additive hazards model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Gerster, Mette

    2011-01-01

    We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned...... Aalen's additive regression for the event time, given exposure, intermediate variable and confounders. The second stage involves applying Aalen's additive model, given the exposure alone, to a modified stochastic process (i.e. a modification of the observed counting process based on the first...

  10. Assessing aquifer vulnerability from lumped parameter modeling of modern water proportions in groundwater mixtures - Application to nitrate pollution in California's South Coast Range

    Science.gov (United States)

    Hagedorn, B.; Ruane, M.; Clark, N.

    2017-12-01

    In California, the overuse of synthetic fertilizers and manure in agriculture has caused nitrate (NO3) to become one of the state's most widespread groundwater pollutants. Given that nitrogen fertilizer applications have steadily increased since the 1950s and that soil percolation and recharge transit times in California can exceed timescales of decades, the nitrate impact on groundwater resources is likely to be a legacy for years and even decades to come. This study presents a methodology for groundwater vulnerability assessment that operates independently of difficult-to-constrain soil and aquifer property data (i.e., saturated thickness, texture, porosity, conductivity, etc.) and instead utilizes groundwater age and, more importantly, groundwater mixing information to illustrate actual vulnerability at the water table. To accomplish this, the modern (i.e., less than 60-year-old) water proportion (MWP) in groundwater mixtures is computed via lumped parameter modeling of chemical tracer (i.e., 3H, 14C and tritiogenic 3He) data. These MWPs are then linked to groundwater dissolved oxygen (DO) values to describe the risk for soil-zone-derived nitrate to accumulate in the saturated zone. Preliminary studies carried out for 71 wells in California's South Coast Range-Coastal (SCRC) study unit reveal MWP values of 3.24% to 21.8%, derived from binary dispersion models. The fact that high MWPs generally coincide with oxic (DO ≥ 1.5 mg/L) groundwater conditions underscores the risk of increased groundwater NO3 pollution for many of the tested wells. These results support the conclusion that best agricultural management and policy objectives should incorporate groundwater vulnerability models that are developed at the same spatial scale as the decision making.
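
    A stripped-down version of the mixing calculation: if tracer concentrations of the modern and pre-modern endmembers are known, the modern water proportion follows from a two-endmember mass balance. The full study uses lumped-parameter dispersion models rather than this simple balance, and the endmember values below are invented.

        def modern_water_proportion(c_sample, c_modern, c_old):
            """Two-endmember mass balance: c_sample = f * c_modern + (1 - f) * c_old."""
            return (c_sample - c_old) / (c_modern - c_old)

        # Hypothetical tritium concentrations [TU]: pre-modern water is essentially 3H-free
        c_old, c_modern = 0.05, 3.0
        for c_sample in (0.15, 0.50, 0.70):
            f = modern_water_proportion(c_sample, c_modern, c_old)
            print(f"sample {c_sample:.2f} TU -> MWP = {100 * f:.1f}%")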

  11. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    Full Text Available The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as various other regions, e.g. Phangna and Phuket, devastating human lives, coastal communications and economic activities. This research study aimed to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems), based on the hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. Those criteria have been weighted by using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. The various techniques of GIS, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for the decision makers to redevelop the region.
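
    The Saaty importance scale belongs to the Analytic Hierarchy Process, where factor weights are derived from a pairwise comparison matrix, typically via its principal eigenvector. The sketch below shows that computation for three invented factors; the actual pairwise judgments used in the study are not given here.

        import numpy as np

        # Pairwise comparison matrix (Saaty 1-9 scale) for three hypothetical factors:
        # elevation, proximity to shoreline, population density.
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                 # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                # normalized priority weights

        # Consistency check (random index RI = 0.58 for n = 3)
        ci = (eigvals[k].real - 3) / (3 - 1)
        print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))  # CR < 0.1 is acceptable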

  12. Measurements and models for hazardous chemical and mixed wastes. 1998 annual progress report

    International Nuclear Information System (INIS)

    Holcomb, C.; Louie, B.; Mullins, M.E.; Outcalt, S.L.; Rogers, T.N.; Watts, L.

    1998-01-01

    'Aqueous waste of various chemical compositions constitutes a significant fraction of the total waste produced by industry in the US. A large quantity of the waste generated by the US chemical process industry is wastewater. In addition, the majority of the waste inventory at DoE sites previously used for nuclear weapons production is aqueous waste, and large quantities of additional aqueous waste are expected to be generated during the clean-up of those sites. In order to effectively treat, safely handle, and properly dispose of these wastes, accurate and comprehensive knowledge of basic thermophysical property information is paramount; this knowledge will lead to huge savings by aiding in the design and optimization of treatment and disposal processes. The main objectives of this project are: (1) to develop and validate models that accurately predict the phase equilibria and thermodynamic properties of hazardous aqueous systems necessary for the safe handling and successful design of separation and treatment processes for hazardous chemical and mixed wastes; and (2) to accurately measure the phase equilibria and thermodynamic properties of a representative system (water + acetone + isopropyl alcohol + sodium nitrate) over the applicable ranges of temperature, pressure, and composition to provide the pure component, binary, ternary, and quaternary experimental data required for model development. As of May 1998, nine months into the first year of a three-year project, the authors have made significant progress in the database development, have begun testing the models, and have been performance-testing the apparatus on the pure components.'

  14. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  15. The Role of Sister Cities' Staff Exchanges in Developing "Learning Cities": Exploring Necessary and Sufficient Conditions in Social Capital Development Utilizing Proportional Odds Modeling.

    Science.gov (United States)

    Buckley, Patrick Henry; Takahashi, Akio; Anderson, Amy

    2015-06-24

    In the last half century former international adversaries have become cooperators through networking and knowledge sharing for decision making aimed at improving quality of life and sustainability; nowhere has this been more striking than at the urban level, where such activity is seen as a key component in building "learning cities" through the development of social capital. Although mega-cities have been leaders in such efforts, mid-sized cities with lesser resource endowments have striven to follow by focusing on more frugal sister city type exchanges. The underlying thesis of our research is that great value can be derived from city-to-city exchanges through social capital development. However, such a study must differentiate between necessary and sufficient conditions. Past studies assumed necessary conditions were met and immediately jumped to demonstrating the existence of structural relationships by measuring networking, while further assuming that the existence of such demonstrated a parallel development of cognitive social capital. Our research addresses this lacuna by stepping back and critically examining these assumptions. To accomplish this goal we use a Proportional Odds Modeling with a Cumulative Logit Link approach to demonstrate the existence of a common latent structure, hence asserting that necessary conditions are met.
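
    A minimal sketch of a proportional odds (cumulative logit) fit on synthetic data, using statsmodels; the covariates and ordinal response are invented stand-ins, not the study's survey data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300

# Synthetic stand-ins for exchange-intensity covariates.
X = pd.DataFrame({"staff_exchanges": rng.poisson(3, n).astype(float),
                  "years_paired": rng.uniform(1, 40, n)})

# Ordinal response: a latent score cut into three ordered categories,
# mimicking a Likert-style social-capital rating.
latent = 0.5 * X["staff_exchanges"] + 0.05 * X["years_paired"] + rng.logistic(size=n)
y = pd.cut(latent, [-np.inf, 2.0, 4.0, np.inf], labels=["low", "mid", "high"])

# Proportional odds model = ordered logit with a cumulative logit link;
# statsmodels estimates the thresholds, so X carries no constant term.
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```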

  18. CalTOX, a multimedia total exposure model for hazardous-waste sites

    International Nuclear Information System (INIS)

    McKone, T.E.

    1993-06-01

    CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substance release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
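
    The final dose step described above reduces to a sum of concentration times intake-factor products over pathways; the sketch below illustrates it with hypothetical numbers, not CalTOX defaults.

```python
# Hypothetical contact-medium concentrations and intake factors (not
# CalTOX defaults). The dose step is: average daily dose = sum over
# pathways of (exposure concentration x intake/uptake factor).
pathways = {
    # medium:           (concentration,  intake factor per kg-day)
    "tap_water":        (0.004,          0.029),   # mg/L,  ~2 L/day / 70 kg
    "personal_air":     (0.0008,         0.29),    # mg/m3, ~20 m3/day / 70 kg
    "home_grown_food":  (0.010,          0.006),   # mg/kg, intake scaled to body wt
}

add = sum(conc * intake for conc, intake in pathways.values())
print(f"average daily potential dose: {add:.2e} mg/kg-day")
```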

  19. Regression analysis of informative current status data with the additive hazards model.

    Science.gov (United States)

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented in which a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well in practical situations. An illustrative example is also provided.
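
    A minimal sketch of what informative current status data look like under an additive hazards model, using a shared frailty as a stand-in for the copula dependence between failure and censoring times; it simulates only the data structure, not the paper's sieve maximum likelihood estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

z = rng.binomial(1, 0.5, n)          # covariate of interest
frailty = rng.gamma(2.0, 0.5, n)     # shared factor linking T and C

# Additive hazards: lambda(t | z) = lambda0 + beta * z; with a constant
# baseline, failure times are exponential given (z, frailty).
lam0, beta = 0.2, 0.3
T = rng.exponential(1.0 / (frailty * (lam0 + beta * z)))

# Monitoring times share the frailty, so censoring is informative.
C = rng.exponential(1.0 / (0.25 * frailty))

# Current status data: we never see T, only C and the status indicator.
delta = (T <= C).astype(int)
print("fraction found failed at inspection:", delta.mean())
```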

  20. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    Science.gov (United States)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models improves fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
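
    A sketch of one common fit measure for such comparisons, the critical success index between modelled and observed binary flood extents, together with a k-of-n model-agreement aggregation; the array shapes and agreement rule are assumptions, not the project's exact protocol.

```python
import numpy as np

def critical_success_index(modelled, observed):
    """CSI (threat score) between boolean flood-extent rasters:
    hits / (hits + misses + false alarms)."""
    m, o = modelled.astype(bool), observed.astype(bool)
    hits = np.sum(m & o)
    misses = np.sum(~m & o)
    false_alarms = np.sum(m & ~o)
    return hits / float(hits + misses + false_alarms)

def csi_of_aggregate(stack, observed, k):
    """Score the aggregate map that flags a cell when at least k of the
    n stacked model extents flag it."""
    return critical_success_index(stack.sum(axis=0) >= k, observed)

# Demo with random stand-ins for six model rasters and an observed map.
rng = np.random.default_rng(0)
stack = rng.random((6, 200, 200)) < 0.3
observed = rng.random((200, 200)) < 0.3
print([round(csi_of_aggregate(stack, observed, k), 3) for k in range(1, 7)])
```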

  1. Relating arithmetical techniques of proportion to geometry

    DEFF Research Database (Denmark)

    Wijayanti, Dyana

    2015-01-01

    The purpose of this study is to investigate how textbooks introduce and treat the theme of proportion in geometry (similarity) and arithmetic (ratio and proportion), and how these themes are linked to each other in the books. To pursue this aim, we use the anthropological theory of the didactic. Considering 6 common Indonesian textbooks in use, we describe how proportion is explained and appears in examples and exercises, using an explicit reference model of the mathematical organizations of both themes. We also identify how the proportion themes of the geometry and arithmetic domains are linked. Our...

  2. Confidence intervals for the first crossing point of two hazard functions.

    Science.gov (United States)

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazards modeling with the Box-Cox transformation are both evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
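
    A minimal numpy sketch of the nonparametric idea: kernel-smooth the Nelson-Aalen hazard increments for each arm and locate the first sign change of their difference. The bandwidth, the no-censoring simplification, and the simulated Weibull samples are illustrative assumptions; the paper's confidence-interval construction is not reproduced.

```python
import numpy as np

def smoothed_hazard(times, grid, bw):
    """Gaussian-kernel smoothing of the Nelson-Aalen hazard increments
    dN(t)/Y(t); assumes no censoring for brevity."""
    t = np.sort(times)
    at_risk = len(t) - np.arange(len(t))     # Y(t_i) just before t_i
    incr = 1.0 / at_risk                     # Nelson-Aalen jumps
    K = np.exp(-0.5 * ((grid[:, None] - t[None, :]) / bw) ** 2)
    return (K / (bw * np.sqrt(2 * np.pi))) @ incr

def first_crossing(grid, h1, h2):
    """First grid point after which h1 - h2 changes sign."""
    s = np.sign(h1 - h2)
    idx = np.flatnonzero(np.diff(s) != 0)
    return grid[idx[0] + 1] if idx.size else None

rng = np.random.default_rng(2)
t_a = rng.weibull(0.7, 500)    # decreasing-hazard arm
t_b = rng.weibull(1.6, 500)    # increasing-hazard arm
grid = np.linspace(0.05, 2.0, 200)
h_a = smoothed_hazard(t_a, grid, bw=0.15)
h_b = smoothed_hazard(t_b, grid, bw=0.15)
print("estimated first crossing near t =", first_crossing(grid, h_a, h_b))
```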

  3. Constant Proportion Debt Obligations (CPDOs)

    DEFF Research Database (Denmark)

    Cont, Rama; Jessen, Cathrine

    2012-01-01

    Constant Proportion Debt Obligations (CPDOs) are structured credit derivatives that generate high coupon payments by dynamically leveraging a position in an underlying portfolio of investment-grade index default swaps. CPDO coupons and principal notes received high initial credit ratings from the major rating agencies, based on complex models for the joint transition of ratings and spreads for all names in the underlying portfolio. We propose a parsimonious model for analysing the performance of CPDO strategies using a top-down approach that captures the essential risk factors of the CPDO. Our analysis shows that the default probability can be made arbitrarily small, and thus the credit rating arbitrarily high, by increasing leverage, but the ratings obtained strongly depend on assumptions on the credit environment (high spread or low spread). More importantly, CPDO loss distributions are found to exhibit a wide range of tail risk measures...

  4. A "mental models" approach to the communication of subsurface hydrology and hazards

    Science.gov (United States)

    Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison

    2016-05-01

    Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data about perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.

  5. Partitioning into hazard subregions for regional peaks-over-threshold modeling of heavy precipitation

    Science.gov (United States)

    Carreau, J.; Naveau, P.; Neppel, L.

    2017-05-01

    The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
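
    A sketch of the peaks-over-threshold building block used here: fitting a generalized Pareto distribution to threshold excesses with scipy. The synthetic rainfall series, threshold choice and return-level arithmetic are illustrative assumptions, not the paper's inference strategy.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
daily = rng.gamma(0.4, 8.0, 20000)        # synthetic daily rainfall (mm)

u = np.quantile(daily, 0.98)              # peaks-over-threshold level
excess = daily[daily > u] - u

# Fit the GP distribution to the excesses; the shape parameter xi is
# what the conditional mixture holds constant within a subregion.
xi, _, sigma = genpareto.fit(excess, floc=0.0)

# Approximate 100-year daily return level implied by the fitted tail.
zeta = (daily > u).mean()                 # threshold exceedance rate
x100 = u + genpareto.ppf(1.0 - 1.0 / (100 * 365.25 * zeta), xi, scale=sigma)
print(f"xi = {xi:.3f}, sigma = {sigma:.2f}, 100-yr level ~ {x100:.1f} mm")
```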

  6. Socio-economic vulnerability to natural hazards - proposal for an indicator-based model

    Science.gov (United States)

    Eidsvig, U.; McLean, A.; Vangelsten, B. V.; Kalsnes, B.; Ciurean, R. L.; Argyroudis, S.; Winter, M.; Corominas, J.; Mavrouli, O. C.; Fotopoulou, S.; Pitilakis, K.; Baills, A.; Malet, J. P.

    2012-04-01

    Vulnerability assessment, with respect to natural hazards, is a complex process that must consider multiple dimensions of vulnerability, including both physical and social factors. Physical vulnerability refers to conditions of physical assets, and may be modeled by the intensity and magnitude of the hazard, the degree of physical protection provided by the natural and built environment, and the physical robustness of the exposed elements. Social vulnerability refers to the underlying factors leading to the inability of people, organizations, and societies to withstand impacts from natural hazards. Social vulnerability models can be used in combination with physical vulnerability models to estimate both direct losses, i.e. losses that occur during and immediately after the impact, and indirect losses, i.e. long-term effects of the event. Direct impacts of a landslide typically include casualties and damage to buildings and infrastructure, while indirect losses may include, for example, business closures or limitations in public services. The direct losses are often assessed using physical vulnerability indicators (e.g. construction material, height of buildings), while indirect losses are mainly assessed using social indicators (e.g. economic resources, demographic conditions). Within the EC-FP7 SafeLand research project, an indicator-based method was proposed to assess relative socio-economic vulnerability to landslides. The indicators represent the underlying factors which influence a community's ability to prepare for, deal with, and recover from the damage associated with landslides. The proposed model includes indicators representing demographic, economic and social characteristics as well as indicators representing the degree of preparedness and recovery capacity. Although the model focuses primarily on the indirect losses, it could easily be extended to include more physical indicators which account for the direct losses. Each indicator is individually...
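
    An indicator-based index of this kind typically rescales each indicator, flips those representing capacities rather than susceptibilities, and aggregates with expert weights; the sketch below shows that pattern with hypothetical indicators and weights, not the SafeLand set.

```python
import numpy as np
import pandas as pd

# Hypothetical indicators for three municipalities (not SafeLand data).
df = pd.DataFrame(
    {"pct_elderly":       [14.0, 22.0, 9.0],   # susceptibility (+)
     "unemployment_rate": [6.0, 11.0, 4.0],    # susceptibility (+)
     "hospital_beds_1k":  [5.2, 2.1, 7.0]},    # recovery capacity (-)
    index=["town_A", "town_B", "town_C"])

capacity = ["hospital_beds_1k"]                 # indicators to flip
weights = pd.Series({"pct_elderly": 0.40,
                     "unemployment_rate": 0.35,
                     "hospital_beds_1k": 0.25}) # expert-assigned

# Min-max rescale to [0, 1], flip capacity indicators, aggregate.
scaled = (df - df.min()) / (df.max() - df.min())
scaled[capacity] = 1.0 - scaled[capacity]
print((scaled * weights).sum(axis=1).round(2))  # relative vulnerability
```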

  7. High resolution global flood hazard map from physically-based hydrologic and hydraulic models.

    Science.gov (United States)

    Begnudelli, L.; Kaheil, Y.; McCollum, J.

    2017-12-01

    The global flood map published online at http://www.fmglobal.com/research-and-resources/global-flood-map at 90m resolution is being used worldwide to understand flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs. The modeling system is based on a physically-based hydrologic model to simulate river discharges, and 2D shallow-water hydrodynamic model to simulate inundation. The model can be applied to large-scale flood hazard mapping thanks to several solutions that maximize its efficiency and the use of parallel computing. The hydrologic component of the modeling system is the Hillslope River Routing (HRR) hydrologic model. HRR simulates hydrological processes using a Green-Ampt parameterization, and is calibrated against observed discharge data from several publicly-available datasets. For inundation mapping, we use a 2D Finite-Volume Shallow-Water model with wetting/drying. We introduce here a grid Up-Scaling Technique (UST) for hydraulic modeling to perform simulations at higher resolution at global scale with relatively short computational times. A 30m SRTM is now available worldwide along with higher accuracy and/or resolution local Digital Elevation Models (DEMs) in many countries and regions. UST consists of aggregating computational cells, thus forming a coarser grid, while retaining the topographic information from the original full-resolution mesh. The full-resolution topography is used for building relationships between volume and free surface elevation inside cells and computing inter-cell fluxes. This approach almost achieves computational speed typical of the coarse grids while preserving, to a significant extent, the accuracy offered by the much higher resolution available DEM. The simulations are carried out along each river of the network by forcing the hydraulic model with the streamflow hydrographs generated by HRR. Hydrographs are scaled so that the peak
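
    A sketch of the core of such an up-scaling step: building, for one coarse cell, the stage-storage relationship from the full-resolution elevations it contains, so volume can be conserved while stage is recovered on the coarse grid. The cell sizes and DEM values are illustrative; UST's inter-cell flux computations are not reproduced.

```python
import numpy as np

def stage_volume_curve(fine_elevs, fine_cell_area, stages):
    """Stage-storage relationship for one coarse cell that retains the
    full-resolution topography: stored water volume when the free
    surface in the coarse cell sits at each stage."""
    z = np.ravel(fine_elevs)
    depth = np.clip(stages[:, None] - z[None, :], 0.0, None)
    return depth.sum(axis=1) * fine_cell_area

# Example: a 10 x 10 block of 30 m cells aggregated to one 300 m cell.
rng = np.random.default_rng(4)
fine = 100.0 + rng.normal(0.0, 1.5, (10, 10))   # fine DEM elevations (m)
stages = np.linspace(98.0, 105.0, 50)
vol = stage_volume_curve(fine, 30.0 * 30.0, stages)

# Inverting the curve recovers the free-surface elevation from the
# conserved volume during each coarse-grid update.
stage_of = lambda v: np.interp(v, vol, stages)
print(stage_of(5000.0))   # stage (m) holding 5000 m3 in this cell
```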

  8. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major database for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem; namely, the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards.

  9. FLOOD HAZARD MAP IN THE CITY OF BATNA (ALGERIA) BY HYDRAULIC MODELING APPROACH

    Directory of Open Access Journals (Sweden)

    Guellouh SAMI

    2016-06-01

    Full Text Available In the light of the global climatic changes that appear to influence the frequency and intensity of floods, and whose damages are still growing, understanding the hydrological processes, their spatiotemporal settings and their extreme forms has become a paramount concern for local communities in terms of forecasting. The aim of this study is to map the flood hazard using a hydraulic modeling method. Operating within a Geographic Information System (GIS) allows a more detailed spatial analysis of the extent of the flooding risk, through the application of hydraulic modeling programs at different flood frequencies. Based on the results of this analysis, decision makers can implement a risk management strategy for river overflows through the city of Batna.

  10. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2006-04-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.

  12. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models.

    Science.gov (United States)

    Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.

    2013-11-01

    El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and resulting in hundreds of victims. Hazard assessment is commonly based on numerical propagation models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study, as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand, the characterization of the threat over the entire coast of El Salvador, and on the other, the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the nearshore elevation we estimated the run-up and the flooded area using empirical relations. We considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite-difference/finite-volume numerical model in this work, based on the linear and nonlinear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that on the western Salvadorian coast run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results...

  13. Issues in testing the new national seismic hazard model for Italy

    Science.gov (United States)

    Stein, S.; Peresan, A.; Kossobokov, V. G.; Brooks, E. M.; Spencer, B. D.

    2016-12-01

    It is important to bear in mind that we know little about how well earthquake hazard maps describe the shaking that will actually occur in the future, and that we have no agreed way of assessing how well a map performed in the past and, thus, whether one map performs better than another. Moreover, we should not forget that different maps can be useful for different end users, who may have different cost-and-benefit strategies. Thus, regardless of the specific tests we choose to use, the adopted testing approach should have several key features: We should assess map performance using all the available instrumental, paleoseismology, and historical intensity data. Instrumental data alone span a period much too short to capture the largest earthquakes - and thus strongest shaking - expected from most faults. We should investigate what causes systematic misfit, if any, between the longest record we have - historical intensity data available for the Italian territory from 217 B.C. to 2002 A.D. - and a given hazard map. We should compare how seismic hazard maps have developed over time. How do the most recent maps for Italy compare to earlier ones? It is important to understand local divergences, which show how the models have evolved into the most recent one. The temporal succession of maps is important: we have to learn from previous errors. We should use the many different tests that have been proposed. All are worth trying, because different metrics of performance show different aspects of how a hazard map performs and can be used. We should compare other maps to the ones we are testing. Maps can be made using a wide variety of assumptions, which will lead to different predicted shaking. It is possible that maps derived by other approaches may perform better. Although current Italian codes are based on probabilistic maps, it is important from both a scientific and a societal perspective to look at all options, including deterministic scenario-based ones. Comparing what works...

  14. Application of statistical and dynamics models for snow avalanche hazard assessment in mountain regions of Russia

    Science.gov (United States)

    Turchaninova, A.

    2012-04-01

    The estimation of extreme avalanche runout distances, flow velocities, impact pressures and volumes is an essential part of snow engineering in mountain regions of Russia, implying avalanche hazard assessment and mapping. Russian guidelines accept the application of different avalanche models, as well as different approaches to the estimation of model input parameters, so different teams of engineers in Russia apply various dynamics and statistical models in engineering practice. This gives avalanche practitioners and experts more freedom, but it causes considerable uncertainty where the avalanche models have serious limitations. We discuss these problems by presenting the results of applying different well-known and widely used statistical models (developed in Russia) and avalanche dynamics models to several avalanche test sites in the Khibini Mountains (Kola Peninsula) and the Caucasus. The most accurate and well-documented data on powder and wet, large rare and small frequent snow avalanche events have been collected since the 1960s in the Khibini Mountains by the Avalanche Safety Center of "Apatit". These data were digitized and are available for use and analysis. A detailed digital avalanche database (GIS) was then created for the first time. It contains contours of observed avalanches (ESRI shapes, more than 50 years of observations), DEMs, remote sensing data, descriptions of snow pits, photos, etc. The Russian avalanche data are thus a unique source of information for understanding avalanche flow rheology and for the future development and calibration of avalanche dynamics models. The GIS database was used to analyze model input parameters and to calibrate and verify the avalanche models. Regarding extreme dynamic parameters, the outputs of different models can differ significantly, which is unacceptable for engineering purposes given the absence of well-defined guidelines in Russia. The frequency curves for the runout distance...

  15. An enhanced fire hazard assessment model and validation experiments for vertical cable trays

    International Nuclear Information System (INIS)

    Li, Lu; Huang, Xianjia; Bi, Kun; Liu, Xiaoshuang

    2016-01-01

    Highlights: • An enhanced model was developed for vertical cable fire hazard assessment in NPPs. • Validation experiments on vertical cable tray fires were conducted. • The capability of the model for cable trays with different cable spacing was tested. - Abstract: The model referred to as FLASH-CAT (Flame Spread over Horizontal Cable Trays) was developed to estimate the heat release rate of vertical cable tray fires. The focus of this work is to investigate the application of an enhanced model to single vertical cable tray fires with different cable spacings. Experiments on vertical cable tray fires with three typical cable spacings were conducted, and the histories of mass loss rate and flame length were recorded during each cable fire. From the experimental results, it is found that the space between cable lines intensifies the cable combustion and accelerates the flame spread. The predictions of the enhanced model show good agreement with the experimental data. At the same time, the enhanced model is shown to be capable of predicting the different behaviors of cable fires with different cable spacings by adjusting the flame spread speed only.

  17. Integrating statistical and process-based models to produce probabilistic landslide hazard at regional scale

    Science.gov (United States)

    Strauch, R. L.; Istanbulluoglu, E.

    2017-12-01

    We develop a landslide hazard modeling approach that integrates a data-driven statistical model and a probabilistic process-based shallow landslide model for mapping probability of landslide initiation, transport, and deposition at regional scales. The empirical model integrates the influence of seven site attribute (SA) classes: elevation, slope, curvature, aspect, land use-land cover, lithology, and topographic wetness index, on over 1,600 observed landslides using a frequency ratio (FR) approach. A susceptibility index is calculated by adding FRs for each SA on a grid-cell basis. Using landslide observations we relate the susceptibility index to an empirically derived probability of landslide impact. This probability is combined with results from a physically-based model to produce an integrated probabilistic map. Slope was key in landslide initiation, while deposition was linked to lithology and elevation. Vegetation transition from forest to alpine vegetation and barren land cover with lower root cohesion leads to higher frequency of initiation. Aspect effects are likely linked to differences in root cohesion and moisture controlled by solar insolation and snow. We demonstrate the model in the North Cascades of Washington, USA and identify locations of high and low probability of landslide impacts that can be used by land managers in their design, planning, and maintenance.
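
    A minimal sketch of the frequency ratio computation described, applied to one hypothetical site-attribute raster; summing each cell's FRs over the seven attributes then gives the susceptibility index.

```python
import numpy as np

def frequency_ratio(class_map, landslide_mask):
    """FR per class of one site attribute: the share of landslide cells
    in the class divided by the share of all cells in the class; FR > 1
    flags classes over-represented among observed landslides."""
    fr = {}
    n_cells = class_map.size
    n_slides = landslide_mask.sum()
    for c in np.unique(class_map):
        in_c = class_map == c
        fr[int(c)] = ((landslide_mask & in_c).sum() / n_slides) / (in_c.sum() / n_cells)
    return fr

# Demo on a random stand-in for a binned slope raster; the cell-wise
# susceptibility index is the sum of FRs over all seven attributes.
rng = np.random.default_rng(5)
slope_class = rng.integers(0, 4, (500, 500))
slides = rng.random((500, 500)) < 0.01
print(frequency_ratio(slope_class, slides))
```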

  18. Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Coppersmith, K.J.; Youngs, R.R.

    1992-01-01

    In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors have developed a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model, which describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model, which describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, the authors allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.

  19. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.

  20. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

    In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures as a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operating load decreases with time; this term depends on the failure probability, 1-R, a representation that captures the memory characteristics of this failure cause. Man-machine interference may affect the failure rate favorably, through learning and correction, or unfavorably, as a consequence of inappropriate human habits in system operations; it is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns continuous adjustment between mating subsystems: when a new system is put on duty, some hidden defects are exposed and eventually disappear, so the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); the overall failure behavior, governed by a number of parameters, is then found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are used to validate the proposed model, and satisfactory results are found from the comparisons.
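
    Since h(t) = -R'(t)/R(t), a hazard rate specified in terms of reliability defines the autonomous ODE R' = -h(R)R. The sketch below integrates one hypothetical three-term form (random failures + cumulative damage + adaptation) to show the bathtub shape; the coefficients and exponent are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficients: a = random failures, b*(1 - R) = cumulative
# damage growing with failure probability, c*R**k = early-life
# adaptation that fades as hidden defects are worked out.
a, b, c, k = 0.02, 0.5, 0.3, 8.0

def h(R):
    return a + b * (1.0 - R) + c * R ** k

# h(t) = -R'(t)/R(t), so the reliability trajectory solves R' = -h(R)*R.
sol = solve_ivp(lambda t, R: -h(R) * R, (0.0, 40.0), [1.0], dense_output=True)

t = np.linspace(0.0, 40.0, 200)
R = sol.sol(t)[0]
hazard = h(R)
print(hazard[0], hazard.min(), hazard[-1])   # high, dip, high: bathtub
```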

  1. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    Science.gov (United States)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic conditions rather than remain fixed. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved when one-stage models pool firm-specific and macroeconomic factors together. Regarding the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models employ the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error becomes lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
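
    A minimal sketch of the first-stage discrete-time hazard model: expand firms into firm-year records and fit a logit, whose fitted probability is the conditional hazard of distress. The panel and its single covariate are synthetic stand-ins for the paper's ratios and market factors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy firm-year panel: each firm contributes one row per year until it
# becomes distressed (distress = 1 in its final year, else 0).
rng = np.random.default_rng(6)
rows = []
for firm in range(200):
    leverage = rng.uniform(0.1, 0.9)
    for year in range(1987, 2005):
        p = 1.0 / (1.0 + np.exp(6.0 - 4.0 * leverage))   # true hazard
        distress = int(rng.random() < p)
        rows.append((firm, year, leverage, distress))
        if distress:
            break
panel = pd.DataFrame(rows, columns=["firm", "year", "leverage", "distress"])

# The discrete-time hazard model is a logit on firm-period records; the
# fitted probability is the conditional hazard of distress in year t.
res = sm.Logit(panel["distress"], sm.add_constant(panel[["leverage"]])).fit(disp=False)
print(res.params)
```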

  2. Challenges in understanding, modelling, and mitigating Lake Outburst Flood Hazard: experiences from Central Asia

    Science.gov (United States)

    Mergili, Martin; Schneider, Demian; Andres, Norina; Worni, Raphael; Gruber, Fabian; Schneider, Jean F.

    2010-05-01

    Lake Outburst Floods can evolve from complex process chains, such as avalanches of rock or ice that produce flood waves in a lake, which may overtop and eventually breach glacial, morainic, landslide, or artificial dams. Rising lake levels can lead to progressive incision and destabilization of a dam, to enhanced groundwater flow (piping), or even to hydrostatic failure of ice dams, which can cause a sudden outflow of the accumulated water. These events often have a highly destructive potential because a large amount of water is released in a short time, with a high capacity to erode loose debris, leading to a powerful debris flow with a long travel distance. The best-known example of a lake outburst flood is the Vajont event (Northern Italy, 1963), where a landslide rushed into an artificial lake which spilled over and caused a flood leading to almost 2000 fatalities. Hazards from the failure of landslide dams are often (not always) fairly manageable: most breaches occur in the first few days or weeks after the landslide event, and the rapid construction of a spillway - though problematic - has resolved some hazardous situations (e.g. in the case of the Hattian landslide in 2005 in Pakistan). Older dams, like Usoi dam (Lake Sarez) in Tajikistan, are usually fairly stable, though landslides into the lakes may create flood waves that overtop and eventually weaken the dams. The analysis and mitigation of glacial lake outburst flood (GLOF) hazard remains a challenge. A number of GLOFs resulting in fatalities and severe damage have occurred during the previous decades, particularly in the Himalayas and in the mountains of Central Asia (Pamir, Tien Shan). The source area is usually far away from the area of impact and events occur at very long intervals or as singularities, so that the population at risk is usually not prepared. Even though potentially hazardous lakes can be identified relatively easily with remote sensing and field work, modeling and predicting of GLOFs (and also...

  3. The Hazard Analysis and Critical Control Points (HACCP) generic model for the production of Thai fermented pork sausage (Nham).

    Science.gov (United States)

    Paukatong, K V; Kunawasen, S

    2001-01-01

    Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes; it is therefore a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed, a generic flow diagram of Nham processes was constructed, and hazard analysis was then conducted. In addition to the microbial hazards from the pathogens previously found in Nham, sodium nitrite and metal were identified as chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points: weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step; the critical limit for nitrite levels in the Nham mixture has been set at 100-200 ppm, a level high enough to control Clostridium botulinum without posing a chemical hazard to the consumer. The physical hazard from metal clips can be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham can be reduced during the fermentation process, with the critical limit for the pH of Nham set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.

  4. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    Science.gov (United States)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggering causes for hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data on past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.
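
    A sketch of the last step, turning a risk curve (loss versus annual exceedance frequency) into an annual risk estimate by integration; the curve values below are illustrative, not the Fella River results.

```python
import numpy as np

# Illustrative risk curve (not the Fella River values): annual frequency
# of exceeding each loss level, with loss in million EUR.
freq = np.array([0.2, 0.1, 0.02, 0.01, 0.002])   # exceedances per year
loss = np.array([0.1, 0.8, 5.0, 12.0, 40.0])

# Annual risk (expected annual loss) = area under the loss-vs-frequency
# curve; trapezoidal integration over ascending frequency suffices.
order = np.argsort(freq)
aal = np.trapz(loss[order], freq[order])
print(f"expected annual loss ~ {aal:.2f} MEUR/yr")

# Repeating this for the minimum / average / maximum risk curves
# brackets the uncertainty band described above.
```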

  5. Use of raster-based data layers to model spatial variation of seismotectonic data in probabilistic seismic hazard assessment

    Science.gov (United States)

    Zolfaghari, Mohammad R.

    2009-07-01

    Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples for such applications are risk mitigation, disaster management, post disaster recovery planning and catastrophe loss estimation and risk management. Due to the lack of proper knowledge with regard to factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain with high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides practical facility for better capture of spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used in order to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples for such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. The proposed methodology makes the best use of the recent advancements in computer technology in both software and hardware. The proposed approach is well structured to be implemented using conventional GIS tools.

  6. River Loire levees hazard studies – CARDigues’ model principles and utilization examples on Blois levees

    Directory of Open Access Journals (Sweden)

    Durand Eduard

    2016-01-01

    Full Text Available Along the river Loire, in order to have a homogeneous method for specific risk assessment studies, a new model named CARDigues (for Levee Breach Hazard Calculation) was developed in partnership with DREAL Centre-Val de Loire (owner of the levees), Cerema and Irstea. This model enables estimation of the probability of failure of every levee section by integrating and crossing different "stability" parameters such as topography and included structures, geology and geotechnical characteristics of materials, hydraulic loads, and observations from visual inspections or instrumentation results considered as disorders (seepage, burrowing animals, vegetation, pipes, etc.). The model and its integrated tool CARDigues check, for each levee section, the probability of initiation and rupture for five breaching scenarios: overflowing, internal erosion, slope instability, external erosion and uplift. It has recently been updated and has been applied to several levee systems by different contractors. The article presents the CARDigues model principles and its recent developments (version V28.00), with examples on the river Loire, and describes how it is currently used for a relevant and global levee system diagnosis and assessment. Levee reinforcement or improvement management is also a prospective application of the CARDigues model.
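
    A sketch of one way to combine the five scenario probabilities into a section-level breach probability, assuming independent failure modes; CARDigues' actual combination rules may differ, and the per-scenario values are hypothetical.

```python
import numpy as np

def section_breach_probability(p_modes):
    """Probability that a levee section breaches through at least one
    scenario, assuming the five failure modes act independently."""
    p = np.asarray(p_modes, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Hypothetical per-scenario probabilities for one section under one
# hydraulic load: overflowing, internal erosion, slope instability,
# external erosion, uplift.
print(section_breach_probability([0.02, 0.01, 0.005, 0.004, 0.001]))
```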

  7. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    Science.gov (United States)

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, relative to what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  8. Human hazards

    International Nuclear Information System (INIS)

    Delpla, M.; Vignes, S.; Wolber, G.

    1976-01-01

    Among the health hazards of ionizing radiation, a distinction is made between observed, likely and theoretical risks. Theoretical risks, derived by extrapolating observations of sublethal exposures down to low doses, may frighten; however, they have nothing in common with reality, as shown, for instance, by the study of carcinogenesis risks at Nagasaki. By extrapolation to low doses, geneticists derive theoretical mutation risks from the observation of certain especially deleterious characters in the progeny of parents exposed to sublethal doses. One cannot agree when, by calculation, they express a population's exposure as a shift of its genetic balance with an increase in the proportion of disabled individuals. As a matter of fact, experimental exposure of successive generations of laboratory animals shows no accumulation of deleterious genes, sublethal doses excepted. Large nuclear plants should not be burdened with dreadful health-based charges, whereas small sources have all too often shown that they may give rise to mortal risks [fr]

  9. Doubly stochastic models for volcanic hazard assessment at Campi Flegrei caldera

    CERN Document Server

    Bevilacqua, Andrea

    2016-01-01

    This study provides innovative mathematical models for assessing the eruption probability and associated volcanic hazards, and applies them to the Campi Flegrei caldera in Italy. Throughout the book, significant attention is devoted to quantifying the sources of uncertainty affecting the forecast estimates. The Campi Flegrei caldera is certainly one of the world's highest-risk volcanoes, with more than 70 eruptions over the last 15,000 years, predominantly explosive ones of varying magnitude, intensity and vent location. In the second half of the twentieth century the volcano apparently once again entered a phase of unrest that continues to the present. Hundreds of thousands of people live inside the caldera and over a million more in the nearby city of Naples, making a future eruption of Campi Flegrei an event with potentially catastrophic consequences at the national and European levels.

  10. Risk assessment framework of fate and transport models applied to hazardous waste sites

    International Nuclear Information System (INIS)

    Hwang, S.T.

    1993-06-01

    Risk assessment is an increasingly important part of the decision-making process in the cleanup of hazardous waste sites. Despite guidelines from regulatory agencies and considerable research efforts to reduce uncertainties in risk assessments, there are still many unanswered issues. This paper presents new research results pertaining to fate and transport models, which will be useful in estimating exposure concentrations and will help reduce uncertainties in risk assessment. These developments include approaches for (1) estimating the degree of emissions and concentration levels of volatile pollutants during the use of contaminated water, (2) estimating the absorption of organic chemicals in the soil matrix through the skin, and (3) estimating steady-state, near-field contaminant concentrations in the aquifer within a waste boundary.

  11. A set of integrated environmental transport and diffusion models for calculating hazardous releases

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1996-01-01

    A set of numerical transport and dispersion models is incorporated within a graphical interface shell to predict the fate of hazardous material released into the environment. The visual shell (EnviroView) consists of an object-oriented knowledge base, which is used for inventory control, site mapping and orientation, and monitoring of materials. Graphical displays of detailed sites, building locations, floor plans, and three-dimensional views within a room are available to the user through a point-and-click interface. In the event of a release to the environment, the user can choose from a selection of analytical, finite element, finite volume, and boundary element methods, which calculate atmospheric transport, groundwater transport, and dispersion within a building interior. The program runs on 486 personal computers under WINDOWS.

  12. Coupling Radar Rainfall Estimation and Hydrological Modelling For Flash-flood Hazard Mitigation

    Science.gov (United States)

    Borga, M.; Creutin, J. D.

    Flood risk mitigation is accomplished through managing either or both the hazard and vulnerability. Flood hazard may be reduced through structural measures which alter the frequency of flood levels in the area. The vulnerability of a community to flood loss can be mitigated through changing or regulating land use and through flood warning and effective emergency response. When dealing with flash-flood hazard, it is generally accepted that the most effective way (and in many instances the only affordable in a sustainable perspective) to mitigate the risk is by reducing the vulnerability of the involved communities, in particular by implementing flood warning systems and community self-help programs. However, both the inherent characteristics of the atmospheric and hydrologic processes involved in flash-flooding and the changing societal needs provide a tremendous challenge to traditional flood forecasting and warning concepts. In fact, the targets of these systems are traditionally localised like urbanised sectors or hydraulic structures. Given the small spatial scale that characterises flash floods and the development of dispersed urbanisation, transportation, green tourism and water sports, human lives and property are exposed to flash flood risk in a scattered manner. This must be taken into consideration in flash flood warning strategies and the investigated region should be considered as a whole and every section of the drainage network as a potential target for hydrological warnings. Radar technology offers the potential to provide information describing rain intensities almost continuously in time and space. Recent research results indicate that coupling radar information to distributed hydrologic modelling can provide hydrologic forecasts at all potentially flooded points of a region. Nevertheless, very few flood warning services use radar data more than on a qualitative basis. After a short review of current understanding in this area, two

  13. A comparative analysis of hazard models for predicting debris flows in Madison County, VA

    Science.gov (United States)

    Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.

    2001-01-01

    During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km2) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model over SINMAP for evaluating slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane, and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
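
    All three models ultimately rest on a limit-equilibrium stability calculation. The sketch below shows the classical infinite-slope factor of safety that underlies SINMAP- and LISA-type analyses; the parameter values are illustrative, not those calibrated for Madison County.

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma_s, gamma_w, z, h, slope_deg):
    """Infinite-slope factor of safety.
    c: cohesion (Pa); phi_deg: friction angle (deg); gamma_s, gamma_w: unit
    weights of soil and water (N/m^3); z: soil depth (m); h: water-table
    height above the failure plane (m); slope_deg: slope angle (deg)."""
    theta, phi = np.radians(slope_deg), np.radians(phi_deg)
    resisting = c + (gamma_s * z - gamma_w * h) * np.cos(theta) ** 2 * np.tan(phi)
    driving = gamma_s * z * np.sin(theta) * np.cos(theta)
    return resisting / driving

# Colluvium with ~2 kPa cohesion on a 30-degree slope, water table at the
# surface (h = z), as the storm analysis above suggests:
fs = factor_of_safety(c=2e3, phi_deg=35, gamma_s=18e3, gamma_w=9.81e3,
                      z=1.5, h=1.5, slope_deg=30)
print(fs)  # FS < 1 indicates failure
```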

  14. A semiparametric hazard model of activity timing and sequencing decisions during visits to theme parks using experimental design data

    NARCIS (Netherlands)

    Kemperman, A.D.A.M.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    In this study we introduce a semiparametric hazard-based duration model to predict the timing and sequence of theme park visitors' activity choice behavior. The model is estimated on the basis of observations of consumer choices in various hypothetical theme parks. These parks are constructed by

  15. Introducing Meta-models for a More Efficient Hazard Mitigation Strategy with Rockfall Protection Barriers

    Science.gov (United States)

    Toe, David; Mentani, Alessio; Govoni, Laura; Bourrier, Franck; Gottardi, Guido; Lambert, Stéphane

    2018-04-01

    The paper presents a new approach to assess the effectiveness of rockfall protection barriers, accounting for the wide variety of impact conditions observed on natural sites. This approach makes use of meta-models, considers a widely used rockfall barrier type, and was developed from FE simulation results. Six input parameters relevant to the block impact conditions have been considered. Two meta-models were developed, concerning the barrier's capability either to stop the block or to reduce its kinetic energy. The effect of the parameter ranges on meta-model accuracy has also been investigated. The results of the study reveal that the meta-models accurately reproduce the response of the barrier to any impact conditions, providing a powerful tool to support the design of these structures. Furthermore, by accommodating the effects of the impact conditions on the prediction of the block-barrier interaction, the approach can be used in combination with rockfall trajectory simulation tools to improve quantitative rockfall hazard assessment and optimise rockfall mitigation strategies.
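
    The meta-model idea can be illustrated independently of the paper's FE code: train a fast statistical surrogate on simulation results, then query it inside a trajectory code. The sketch below uses synthetic stand-in data and an assumed toy stopping rule in place of the real FE outputs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# Six impact parameters (e.g. block energy, velocity, mass, impact height,
# incidence angle, impact position), all synthetic and scaled to [0, 1].
X = rng.uniform(0, 1, size=(n, 6))
# Toy rule standing in for FE results: low energy + central impact -> stopped.
stopped = (0.6 * X[:, 0] + 0.4 * np.abs(X[:, 5] - 0.5) < 0.45).astype(int)

meta_model = GradientBoostingClassifier().fit(X, stopped)
# Once fitted, the surrogate can be evaluated thousands of times per second
# inside a rockfall trajectory simulation, unlike a full FE run.
print(meta_model.predict_proba(X[:3])[:, 1])
```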

  16. General methods for analyzing bounded proportion data

    OpenAIRE

    Hossain, Abu

    2017-01-01

    This thesis introduces two general classes of models for analyzing a proportion response variable Y that can take values between zero and one, inclusive of zero and/or one. The models are the inflated GAMLSS model and the generalized Tobit GAMLSS model. The inflated GAMLSS model extends the flexibility of beta-inflated models by allowing the distribution on (0,1) of the continuous component of the dependent variable to come from any explicit or transformed (i.e. logit or truncated...
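
    For concreteness, here is a minimal sketch of the density behind the simplest inflated model of this kind, a zero-and-one-inflated beta: point masses p0 and p1 at the boundaries plus a Beta(a, b) density on (0,1). The parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

def zoib_logpdf(y, p0, p1, a, b):
    """Log-density of a zero-one inflated beta distribution."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    out[y == 0] = np.log(p0)                     # point mass at zero
    out[y == 1] = np.log(p1)                     # point mass at one
    inner = (y > 0) & (y < 1)                    # continuous component
    out[inner] = np.log1p(-(p0 + p1)) + stats.beta.logpdf(y[inner], a, b)
    return out

print(zoib_logpdf([0.0, 0.3, 1.0], p0=0.10, p1=0.05, a=2.0, b=5.0))
```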

  17. Three-dimensional displays for natural hazards analysis, using classified Landsat Thematic Mapper digital data and large-scale digital elevation models

    Science.gov (United States)

    Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.

    1991-01-01

    Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.

  18. Development of a Probabilistic Tornado Wind Hazard Model for the Continental United States Volume I: Main Report

    International Nuclear Information System (INIS)

    Boissonnade, A; Hossain, Q; Kimball, J

    2000-01-01

    Since the mid-1980s, assessment of the wind and tornado risks at Department of Energy (DOE) high- and moderate-hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model developed by McDonald for severe winds at sub-tornado wind speeds and a separate model developed by Fujita for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels at a given location based on the methodology in UCRL-53526 are different from those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement it. It also presents the tornado wind hazard curves obtained from applying the method to DOE sites throughout the contiguous United States.
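
    The core product of such a methodology is a hazard curve: annual exceedance frequency versus wind speed. The sketch below shows only the exceedance bookkeeping on a synthetic site record; a real tornado model would also weight by path area and intensity distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 50
# Hypothetical peak wind speeds (m/s) recorded at a site over 50 years.
speeds = rng.weibull(2.0, size=200) * 40.0

for v in range(20, 120, 20):
    lam = (speeds > v).sum() / years          # annual exceedance frequency
    rp = np.inf if lam == 0 else 1.0 / lam    # corresponding return period
    print(f"{v:3d} m/s  lambda = {lam:.3f}/yr  return period ~ {rp:.0f} yr")
```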

  19. [Application of occupational hazard risk index model in occupational health risk assessment in a decorative coating manufacturing enterprises].

    Science.gov (United States)

    He, P L; Zhao, C X; Dong, Q Y; Hao, S B; Xu, P; Zhang, J; Li, J G

    2018-01-20

    Objective: To evaluate the occupational health risk in decorative coating manufacturing enterprises and to explore the applicability of the occupational hazard risk index model in such health risk assessment, so as to provide a basis for the health management of enterprises. Methods: A decorative coating manufacturing enterprise in Hebei Province was chosen as the research object. According to the types of occupational hazards and the patterns of exposure, the occupational hazard risk index model was used to evaluate the risk from occupational hazards in the key positions of the enterprise, and the results were checked against workplace test results and occupational health examinations. Results: The positions of oily painters, water-borne painters, filling workers and packers exposed to noise were classified as moderate harm, and the positions of color workers exposed to chromic acid salts and of oily painters exposed to butyl acetate as mild harm. Other positions were harmless. The abnormality rate for noise exposure in the physical examination results was 6.25%; no abnormalities were found for the other risk factors. Conclusion: The occupational hazard risk index model can be used in the occupational health risk assessment of decorative coating manufacturing enterprises; noise was the key hazard among the occupational hazards in this enterprise.

  20. GIS and RS-based modelling of potential natural hazard areas in Pehchevo municipality, Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Milevski Ivica

    2013-01-01

    In this paper, an approach to Geographic Information System (GIS) and Remote Sensing (RS) based assessment of potential natural hazard areas (excess erosion, landslides, flash floods and fires) is presented. For that purpose, Pehchevo Municipality in the easternmost part of the Republic of Macedonia was selected as a case study area, because of the high local impact of natural hazards on the environment, the socio-demographic situation and the local economy. First, the most relevant static factors for each type of natural hazard are selected (topography, land cover, anthropogenic objects and infrastructure). With GIS and satellite imagery, a multi-layer calculation is performed based on available traditional equations, clustering or discretization procedures. In this way, suitable, relatively "static" natural hazard maps (models) are produced. Then, dynamic (mostly climate-related) factors are included in the previous models, resulting in appropriate scenarios correlated with different amounts of precipitation, temperature, wind direction, etc. Finally, the GIS-based scenarios are evaluated and tested with field checks or very fine resolution Google Earth imagery, showing good accuracy. Further development of such GIS models, in connection with automatic remote meteorological stations and dynamic satellite imagery (like MODIS), will provide timely warning of coming natural hazards, avoiding potential damage or even casualties.
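
    The "multi-layer calculation" is essentially a weighted overlay of normalized factor rasters. A minimal numpy sketch, with invented layers and weights standing in for the topographic, land-cover and anthropogenic factors named above:

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)
slope = rng.uniform(0, 45, shape)         # degrees, stand-in for topography
bare_soil = rng.uniform(0, 1, shape)      # stand-in for a land-cover layer
roads = rng.uniform(0, 1, shape)          # stand-in for infrastructure

def normalize(x):
    return (x - x.min()) / (x.max() - x.min())

w = {"slope": 0.5, "bare_soil": 0.3, "roads": 0.2}   # assumed weights
hazard = (w["slope"] * normalize(slope)
          + w["bare_soil"] * normalize(bare_soil)
          + w["roads"] * normalize(roads))
# Discretize the continuous score into hazard classes for mapping.
classes = np.digitize(hazard, bins=[0.25, 0.5, 0.75])
print(np.bincount(classes.ravel()))       # pixels per hazard class
```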

  1. Flood Hazard Mapping using Hydraulic Model and GIS: A Case Study in Mandalay City, Myanmar

    Directory of Open Access Journals (Sweden)

    Kyu Kyu Sein

    2016-01-01

    This paper presents the use of flood frequency analysis integrated with a 1D hydraulic model (HEC-RAS) and a Geographic Information System (GIS) to prepare flood hazard maps for different return periods on the Ayeyarwady River at Mandalay City in Myanmar. Gumbel's distribution was used to calculate the flood peaks of different return periods, namely 10, 20, 50, and 100 years. The flood peaks from the frequency analysis were input into the HEC-RAS model to find the corresponding flood levels and extents in the study area. The model results were then integrated with ArcGIS to generate flood plain maps. Flood depths and extents have been identified through the flood plain maps. Analysis of the 100-year return period flood plain map indicated that 157.88 km2 (17.54% of the study area) is likely to be inundated. The predicted flood depth varies from just above 0 to 24 m on the flood plains and the river. Depths between 3 and 5 m were identified in the urban areas of Chanayetharzan, Patheingyi, and Amarapura Townships. The largest inundated area, 85 km2, was in Amarapura Township.
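
    The Gumbel step of such a study is easily reproduced. The sketch below fits annual maximum discharges (synthetic here, not the Ayeyarwady record) and reads off the T-year flood peaks that would then be fed into the hydraulic model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max_q = rng.gumbel(loc=12_000, scale=2_500, size=40)  # m^3/s, synthetic

loc, scale = stats.gumbel_r.fit(annual_max_q)
for T in (10, 20, 50, 100):
    # T-year flood = quantile at non-exceedance probability 1 - 1/T.
    q_T = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:3d}-year flood peak: {q_T:,.0f} m^3/s")
```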

  2. Identifying model pollutants to investigate biodegradation of hazardous XOCs in WWTPs

    Energy Technology Data Exchange (ETDEWEB)

    Press-Kristensen, Kaare; Ledin, Anna; Schmidt, Jens Ejbye; Henze, Mogens [Department of Environment and Resources, Technical University of Denmark Building 115, 2800 Lyngby (Denmark)

    2007-02-01

    Xenobiotic organic compounds (XOCs) in wastewater treatment plant (WWTP) effluents might cause toxic effects in ecosystems. Several investigations have emphasized biodegradation as an important removal mechanism to reduce pollution with XOCs from WWTP effluents. The aim of the study was to design a screening tool to identify and select hazardous model pollutants for the further investigation of biodegradation in WWTPs. The screening tool consists of three criteria: the XOC is present in WWTP effluents, the XOC constitutes an intolerable risk in drinking water or the environment, and the XOC is expected to be biodegradable in WWTPs. The screening tool was tested on bisphenol A (BPA), carbamazepine (CBZ), di(2-ethylhexyl) phthalate (DEHP), 17β-estradiol (E2), estrone (E1), 17α-ethinylestradiol (EE2), ibuprofen, naproxen, nonylphenol (NP), and octylphenol (OP). BPA, DEHP, E2, E1, EE2, and NP passed all criteria in the screening tool and were selected as model pollutants. OP did not pass the screening and was rejected as a model pollutant. CBZ, ibuprofen, and naproxen could not be fully evaluated due to insufficient data. (author)

  3. Non-Volcanic release of CO2 in Italy: quantification, conceptual models and gas hazard

    Science.gov (United States)

    Chiodini, G.; Cardellini, C.; Caliro, S.; Avino, R.

    2011-12-01

    Central and southern Italy are characterized by the presence of many reservoirs naturally recharged by CO2 of deep provenance. In the western sector, these reservoirs feed hundreds of gas emissions at the surface. Many studies in recent years were devoted to (i) elaborating a map of CO2 Earth degassing in the region; (ii) assessing the gas hazard; (iii) developing methods suitable for measuring the gas fluxes from different types of emissions; (iv) elaborating a conceptual model of Earth degassing and its relation to the seismic activity of the region; and (v) developing physical numerical models of CO2 air dispersion. The main results obtained are: 1) A general, regional map of CO2 Earth degassing in Central Italy has been elaborated. The total CO2 flux in the area has been estimated at ~10 Mt/a, released to the atmosphere through numerous dangerous gas emissions or by degassing spring waters (~10% of the CO2 globally estimated to be released by the Earth through volcanic activity). 2) An online, open-access, georeferenced database of the main CO2 emissions (~250) was set up (http://googas.ov.ingv.it). CO2 fluxes > 100 t/d characterise 14% of the degassing sites, while CO2 fluxes from 10 t/d to 100 t/d have been estimated for about 35% of the gas emissions. 3) The sites of the gas emissions are not suitable for life: the gas has caused many accidents to animals and people. In order to mitigate the gas hazard, a specific model of CO2 air dispersion has been developed and applied to the main degassing sites. A notable application concerned Mefite d'Ansanto, southern Apennines, which is the largest natural emission of low-temperature CO2-rich gases from a non-volcanic environment ever measured on Earth (~2000 t/d). Under low wind conditions, the gas flows along a narrow natural channel, producing a persistent gas river which has killed many people and animals over the years. The application of the physical numerical model allowed us to
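
    As a rough illustration of the kind of air-dispersion estimate involved, here is a textbook Gaussian plume for a continuous ground-level source; the dispersion coefficients are crude power-law assumptions, and this is not the model actually applied at Mefite d'Ansanto.

```python
import numpy as np

def plume_concentration(q, u, x, y, z, h=0.0):
    """Gaussian plume concentration (kg/m^3): source q (kg/s), wind u (m/s),
    receptor at (x, y, z) m downwind, source height h."""
    sigma_y = 0.08 * x ** 0.9            # assumed dispersion coefficients
    sigma_z = 0.06 * x ** 0.85
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))   # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

q = 2000e3 / 86400.0   # ~2000 t/d expressed in kg/s
# Concentration 200 m downwind at breathing height, under light wind:
print(plume_concentration(q, u=1.0, x=200.0, y=0.0, z=1.5))
```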

  4. Conceptual model of volcanism and volcanic hazards of the region of Ararat valley, Armenia

    Science.gov (United States)

    Meliksetian, Khachatur; Connor, Charles; Savov, Ivan; Connor, Laura; Navasardyan, Gevorg; Manucharyan, Davit; Ghukasyan, Yura; Gevorgyan, Hripsime

    2015-04-01

    Armenia and the adjacent volcanically active regions in Iran, Turkey and Georgia are located in the collision zone between the Arabian and Eurasian lithospheric plates. The majority of studies of regional collision-related volcanism use the model proposed by Keskin (2003), in which volcanism is driven by Neo-Tethyan slab break-off. In Armenia, >500 Quaternary-Holocene volcanoes of the Gegham, Vardenis and Syunik volcanic fields are hosted within pull-apart structures formed by active faults and their segments (Karakhanyan et al., 2002), while the tectonic position of the large-volume basalt-dacite Aragats volcano and its peripheral volcanic plateaus is different, and its position away from major fault lines requires a more complex volcano-tectonic setup. Our detailed volcanological, petrological and geochemical studies provide insight into the nature of such volcanic activity in the region of Ararat Valley. Most magmas, such as those erupted in Armenia, are volatile-poor and erupt fairly hot. Here we report newly discovered tephra sequences in Ararat valley that were erupted from the historically active Ararat stratovolcano and provide evidence for explosive eruption of young, mid-K2O calc-alkaline and volatile-rich (>4.6 wt% H2O; amphibole-bearing) magmas. Such young eruptions, in addition to the ignimbrite and lava flow hazards from Gegham and Aragats, present a threat to >1.4 million people (~½ of the population of Armenia). We will report numerical simulations of potential volcanic hazards for the region of Ararat valley near Yerevan, including tephra fallout, lava flows and the opening of new vents. Connor et al. (2012) J. Applied Volcanology 1:3, 1-19; Karakhanian et al. (2002), JVGR, 113, 319-344; Keskin, M. (2003) Geophys. Res. Lett. 30, 24, 8046.

  5. Local models for rainstorm-induced hazard analysis on Mediterranean river-torrential geomorphological systems

    Directory of Open Access Journals (Sweden)

    N. Diodato

    2004-01-01

    Damaging hydrogeomorphological events are defined as one or more simultaneous phenomena (e.g. accelerated erosion, landslides, flash floods and river floods) occurring in a spatially and temporally random way and triggered by rainfall of different intensity and extent. Storm rainfall values are highly dependent on weather conditions and relief. However, the impact of rainstorms in Mediterranean mountain environments depends mainly on short- and long-term climatic fluctuations, especially in rainfall quantity. An algorithm for characterising this impact, called the Rainfall Hazard Index (RHI), is developed with an inexpensive methodology. In RHI modelling, we assume that the river-torrential system has adapted to the natural hydrological regime, and that a sudden fluctuation in this regime, especially one exceeding the thresholds of an acceptable range of flexibility, may have disastrous consequences for the mountain environment. The RHI integrates two rainfall variables, based on current and historical storm depth data of a fixed duration, and one dimensionless parameter representing the degree of ecosystem flexibility. The approach was applied to a test site in the Benevento river-torrential landscape, Campania (Southern Italy): a database of 27 events which occurred during a 77-year period (1926-2002) was compared with the Benevento-station RHI(24h) for qualitative validation. Trends in RHIx for annual maximum storms of duration 1, 3 and 24 h were also examined. Little change is observed for storm durations of 3 and 24 h, but a significant increase is found in the hazard of short, intense storms (RHIx(1h)), in agreement with a reduction in the return period for extreme rainfall events.

  6. Modelling Active Faults in Probabilistic Seismic Hazard Analysis (PSHA) with OpenQuake: Definition, Design and Experience

    Science.gov (United States)

    Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco

    2016-04-01

    The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing the varied seismogenic sources found therein into OpenQuake's own definition. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: what elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including

  7. Proportioning of light weight concrete

    DEFF Research Database (Denmark)

    Palmus, Lars

    1996-01-01

    Development of a method to determine the proportions of the raw materials in light weight concrete made with light expanded clay aggregate. The method is based on composite theory.

  8. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    Science.gov (United States)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing accurate seismic hazard maps and predicting hazardous areas is necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both the nonlinearity and the chaotic behavior of data where the number of data points is limited. In this paper, the earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Regarding our achievements, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
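
    A toy Mamdani-style rule shows the mechanics of such a fuzzy system; the membership functions, scales and single rule below are invented for illustration and are not the Zagros model's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def hazard_score(rate, max_mag):
    high_rate = tri(rate, 0.5, 1.0, 1.5)       # "activity is high" (events/yr)
    large_mag = tri(max_mag, 5.5, 7.0, 8.5)    # "large events are possible"
    # Rule: IF rate is high AND magnitude is large THEN hazard is high.
    firing = min(high_rate, large_mag)         # min() acts as the fuzzy AND
    return 0.2 + 0.6 * firing                  # crude defuzzified score

print(hazard_score(rate=1.1, max_mag=7.2))
```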

  9. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayesian modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano; this is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in the open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome-collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano's dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome-collapse flows, and dome-collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, and then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano, as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.

  10. Building an Ensemble Seismic Hazard Model for the Magnitude Distribution by Using Alternative Bayesian Implementations

    Science.gov (United States)

    Taroni, M.; Selva, J.

    2017-12-01

    In this work we show how we built an ensemble seismic hazard model for the magnitude distribution for the TSUMAPS-NEAM EU project (http://www.tsumaps-neam.eu/). The considered source area includes the whole NEAM region (North East Atlantic, Mediterranean and connected seas). We build our models by using the catalogs (EMEC and ISC), their completeness and the regionalization provided by the project. We developed four alternative implementations of a Bayesian model, considering tapered or truncated Gutenberg-Richter distributions, and fixed or variable b-value. The frequency-size distribution is based on the Weichert formulation. This allows for simultaneously assessing all the frequency-size distribution parameters (a-value, b-value, and corner magnitude), using multiple completeness periods for the different magnitudes. With respect to previous studies, we introduce the tapered Pareto distribution (in addition to the classical truncated Pareto), and we build a novel approach to quantify the prior distribution. For each alternative implementation, we set the prior distributions using the global seismic data grouped according to the different types of tectonic setting, and assigned them to the related regions. The estimation is based on the complete (not declustered) local catalog in each region. Using the complete catalog also allows us to consider foreshocks and aftershocks in the seismic rate computation: the Poissonicity of the tsunami events (and similarly of the exceedances of the PGA) will be ensured by Le Cam's theorem. This Bayesian approach provides robust estimations also in zones where few events are available, but also leaves open the possibility of exploring the uncertainty associated with the estimation of the magnitude distribution parameters (e.g. with the classical Metropolis-Hastings Monte Carlo method). Finally we merge all the models with their uncertainty to create the ensemble model that represents our knowledge of the seismicity in the
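
    The tapered Pareto alternative named above has a simple closed form on seismic moment. A sketch, with illustrative threshold and corner magnitudes:

```python
import numpy as np

def moment(m):
    return 10 ** (1.5 * m + 9.1)          # seismic moment (N*m) from magnitude

def tapered_pareto_sf(m, m_min, m_corner, beta):
    """P(M >= m): Pareto decay with exponent beta, exponential taper at the
    corner moment (Kagan-style tapered Gutenberg-Richter)."""
    M, Mt, Mc = moment(m), moment(m_min), moment(m_corner)
    return (Mt / M) ** beta * np.exp((Mt - M) / Mc)

# beta = (2/3) * b-value; here b = 1.0 and an assumed corner magnitude of 8.0.
for m in (5.0, 6.0, 7.0, 8.0, 8.5):
    print(m, tapered_pareto_sf(m, m_min=4.5, m_corner=8.0, beta=2 / 3))
```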

  11. Combining SLBL routine with landslide-generated tsunami model for a quick hazard assessment tool

    Science.gov (United States)

    Franz, Martin; Rudaz, Benjamin; Jaboyedoff, Michel; Podladchikov, Yury

    2016-04-01

    Regions with steep topography are potentially subject to landslide-induced tsunami, because of the proximity between lakes, rivers, sea shores and potential instabilities. The concentration of population and infrastructure on the water body shores and in downstream valleys could lead to catastrophic consequences. In order to assess this phenomenon comprehensively, together with the induced risks, we have developed a tool which allows the construction of the landslide geometry and is able to simulate its propagation, the generation and propagation of the wave, and eventually its spread on the shores or the associated downstream flow. The tool is developed in the Matlab© environment, with a graphical user interface (GUI) to select the parameters in a user-friendly manner. The whole process is done in three steps involving different methods. Firstly, the geometry of the sliding mass is constructed using the Sloping Local Base Level (SLBL) concept. Secondly, the propagation of this volume is performed using a model based on viscous flow equations. Finally, the wave generation and its propagation are simulated using the shallow water equations stabilized by the Lax-Friedrichs scheme. The transition between wet and dry bed is handled by combining the two latter sets of equations. The intensity map is based on the flooding criterion used in Switzerland, provided by the OFEG, and results from the multiplication of the velocity and the depth obtained during the simulation. The tool can be used for hazard assessment in the case of well-known landslides, where the SLBL routine can be constrained and checked for a realistic construction of the geometrical model. In less well-known cases, various failure plane geometries can be built automatically within a given range, and thus a multi-scenario approach is used. In any case, less well-known parameters such as the landslide velocity, its run-out distance, etc. can also be set to vary within given ranges, leading to multi
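
    The wave-propagation step can be illustrated with a minimal 1-D shallow-water solver using the same Lax-Friedrichs stabilization; flat bed, toy initial hump and crude open boundaries, so this is a schematic of the scheme rather than the authors' 2-D tool.

```python
import numpy as np

g, nx, dx = 9.81, 200, 1.0
h = np.ones(nx); h[90:110] = 2.0          # initial water-surface hump (m)
hu = np.zeros(nx)                          # discharge per unit width

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

for _ in range(200):
    dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))   # CFL condition
    U, F = np.array([h, hu]), flux(h, hu)
    U_new = U.copy()
    # Lax-Friedrichs: centered flux difference plus numerical diffusion.
    U_new[:, 1:-1] = (0.5 * (U[:, 2:] + U[:, :-2])
                      - dt / (2 * dx) * (F[:, 2:] - F[:, :-2]))
    U_new[:, 0], U_new[:, -1] = U_new[:, 1], U_new[:, -2]     # open ends
    h, hu = U_new

print(h.min(), h.max())
```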

  12. Hazard Models From Periodic Dike Intrusions at Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Montgomery-Brown, E. K.; Miklius, A.

    2016-12-01

    The persistence and regular recurrence intervals of dike intrusions in the East Rift Zone (ERZ) of Kīlauea Volcano lead to the possibility of constructing a time-dependent intrusion hazard model. Dike intrusions are commonly observed in Kīlauea Volcano's ERZ and can occur repeatedly in regions that correlate with seismic segments (sections of rift seismicity with persistent, definite lateral boundaries) proposed by Wright and Klein (USGS PP1806, 2014). Five such ERZ intrusions have occurred since 1983 with inferred locations downrift of the bend in Kīlauea's ERZ, the first (1983) being the start of the ongoing ERZ eruption. The ERZ intrusions occur on one of two segments that are spatially coincident with seismic segments: Makaopuhi (1993 and 2007) and Nāpau (1983, 1997, and 2011). During each intrusion, the amount of inferred dike opening was between 2 and 3 meters. The times between ERZ intrusions for same-segment pairs are all close to 14 years: 14.07 (1983-1997), 14.09 (1997-2011), and 13.95 (1993-2007) years, with the Nāpau segment becoming active about 3.5 years after the Makaopuhi segment in each case. Four additional upper ERZ intrusions are also considered here. Dikes in the upper ERZ have much smaller opening (~10 cm) and shorter recurrence intervals of ~8 years, with more variability. The amount of modeled dike opening during each of these events roughly corresponds to the amount of seaward south flank motion and deep rift opening accumulated in the time between events. Additionally, the recurrence interval of ~14 years appears to be unaffected by the magma surge of 2003-2007, suggesting that flank motion, rather than magma supply, could be a controlling factor in the timing and periodicity of intrusions. Flank control over the timing of magma intrusions runs counter to historical research suggesting that dike intrusions at Kīlauea are driven by magma overpressure. This relatively free sliding may have resulted from decreased
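
    A ~14-year recurrence with very small scatter invites a renewal-model forecast. The sketch below computes the conditional probability of the next intrusion within a forecast window; the normal recurrence density and its standard deviation are assumptions standing in for whatever distribution a formal model would use.

```python
from scipy import stats

def conditional_probability(t_elapsed, dt, mean=14.0, sd=0.5):
    """P(event in [t, t+dt] | no event by t) under a normal recurrence
    density; mean from the same-segment pairs above, sd assumed."""
    rec = stats.norm(loc=mean, scale=sd)
    return (rec.sf(t_elapsed) - rec.sf(t_elapsed + dt)) / rec.sf(t_elapsed)

# Thirteen years after the last same-segment intrusion, the chance of the
# next one arriving within two years is already high:
print(conditional_probability(t_elapsed=13.0, dt=2.0))
```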

  13. Hazard avoidance via descent images for safe landing

    Science.gov (United States)

    Yan, Ruicheng; Cao, Zhiguo; Zhu, Lei; Fang, Zhiwen

    2013-10-01

    In planetary or lunar landing missions, hazard avoidance is critical for landing safety. Therefore, it is very important to correctly detect hazards and effectively find a safe landing area during the last stage of descent. In this paper, we propose a passive-sensing-based HDA (hazard detection and avoidance) approach using descent images to lower the landing risk. In the hazard detection stage, a statistical probability model based on hazard similarity is adopted to evaluate the image and detect hazardous areas, so that a binary hazard image can be generated. Afterwards, a safety coefficient, which jointly utilizes the proportion of hazards in the local region and the hazard distribution inside it, is proposed to find potential regions with fewer hazards in the binary hazard image. By using the safety coefficient in a coarse-to-fine procedure and combining it with the local ISD (intensity standard deviation) measure, the safe landing area is determined. The algorithm is evaluated and verified with many simulated descent downward-looking images rendered from lunar orbital satellite images.
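
    The safety coefficient can be pictured as a sliding-window statistic over the binary hazard image. In the sketch below the window statistics and their weights are assumptions chosen for illustration, not the paper's calibrated values.

```python
import numpy as np

def safety_map(hazard_img, win=15, w_prop=0.7, w_spread=0.3):
    """hazard_img: 2-D {0,1} array (1 = hazard). Returns a map where higher
    values mean safer: penalizes both the hazard proportion in the window
    and the spread of hazards inside it."""
    pad = win // 2
    padded = np.pad(hazard_img.astype(float), pad, mode="edge")
    out = np.zeros(hazard_img.shape)
    for i in range(hazard_img.shape[0]):
        for j in range(hazard_img.shape[1]):
            w = padded[i:i + win, j:j + win]
            out[i, j] = 1.0 - (w_prop * w.mean() + w_spread * w.std())
    return out

img = (np.random.default_rng(4).uniform(size=(64, 64)) > 0.8).astype(int)
s = safety_map(img)
print(np.unravel_index(np.argmax(s), s.shape))   # safest candidate pixel
```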

  14. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    International Nuclear Information System (INIS)

    Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey

    2015-01-01

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships

  15. Comparative hazard analysis and toxicological modeling of diverse nanomaterials using the embryonic zebrafish (EZ) metric of toxicity

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)

    2015-06-15

    The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.

  16. Winter wheat response to irrigation, nitrogen fertilization, and cold hazards in the Community Land Model 5

    Science.gov (United States)

    Lu, Y.

    2017-12-01

    Winter wheat is a staple crop for global food security and the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in the soil carbon balance and in land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is crucial not only for predicting future yield under a changing climate, but also for understanding the energy and water cycles of winter-wheat-dominated regions. A winter wheat growth model has been developed in the Community Land Model 4.5 (CLM4.5), but its responses to irrigation and nitrogen fertilization have not been validated. In this study, I will validate the winter wheat growth response to irrigation and nitrogen fertilization at five winter wheat field sites (TXLU, KSMA, NESA, NDMA, and ABLE) in North America, which were originally designed to understand winter wheat response to nitrogen fertilization and water treatments (4 nitrogen levels and 3 irrigation regimes). I also plan to update the linkages between winter wheat yield and cold hazards. The previous cold damage function affects yield only indirectly, through a reduction in leaf area index (LAI) and hence photosynthesis; such an approach can sometimes produce a spuriously higher yield, because the reduced LAI leaves more nutrients available during the grain-fill stage.

  17. Ranking of several ground-motion models for seismic hazard analysis in Iran

    International Nuclear Information System (INIS)

    Ghasemi, H; Zare, M; Fukushima, Y

    2008-01-01

    In this study, six attenuation relationships are ranked according to the scheme proposed by Scherbaum et al (2004 Bull. Seismol. Soc. Am. 94 1-22). First, the strong motions recorded during the 2002 Avaj, 2003 Bam, 2004 Kojour and 2006 Silakhor earthquakes are consistently processed. Then the normalized residual sets are determined for each selected ground-motion model, based on the strong-motion records chosen. The main advantage of these records is that the corresponding information about the causative fault plane has been well studied for the selected events. Such information is used to estimate several control parameters which are essential inputs for attenuation relations. The selected relations (Zare et al (1999 Soil Dyn. Earthq. Eng. 18 101-23); Fukushima et al (2003 J. Earthq. Eng. 7 573-98); Sinaeian (2006 PhD Thesis, International Institute of Earthquake Engineering and Seismology, Tehran, Iran); Boore and Atkinson (2007 PEER, Report 2007/01); Campbell and Bozorgnia (2007 PEER, Report 2007/02); and Chiou and Youngs (2006 PEER Interim Report for USGS Review)) have been deemed suitable for predicting peak ground-motion amplitudes in the Iranian plateau. Several graphical techniques and goodness-of-fit measures are also applied to analyse the statistical distribution of the normalized residual sets. This analysis reveals the ground-motion models developed using Iranian strong-motion records to be the most appropriate ones in the Iranian context. The results of the present study are applicable to seismic hazard assessment projects in Iran
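
    The normalized-residual machinery behind such rankings is compact: residuals of observed amplitudes against a model's median and sigma in log space should be standard normal for a well-calibrated model. A sketch on synthetic values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
ln_observed = rng.normal(loc=-1.0, scale=0.7, size=300)   # e.g. ln(PGA in g)
ln_median, sigma = -1.1, 0.65            # a model's median and sigma (assumed)

z = (ln_observed - ln_median) / sigma    # normalized residuals
# Goodness-of-fit summary: compare z against the standard normal.
ks_stat, p_value = stats.kstest(z, "norm")
print(f"mean = {z.mean():.2f}, sd = {z.std():.2f}, KS p = {p_value:.3f}")
```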

  18. A Risk Assessment Model for Water Resources: releases of dangerous and hazardous substances.

    Science.gov (United States)

    Rebelo, Anabela; Ferra, Isabel; Gonçalves, Isolina; Marques, Albertina M

    2014-07-01

    Many dangerous and hazardous substances are used, transported and handled daily in diverse situations, from domestic use to industrial processing, and during those operations spills or other anomalous situations may occur that can lead to contaminant releases, followed by contamination of surface water or groundwater through direct or indirect pathways. When dealing with this problem, rapid, technically sound decisions are desirable, and complex methods may not be able to deliver information quickly. This work describes a simple conceptual model built on multi-criteria analysis, involving a strategic appraisal for contamination risk assessment, to support local authorities in making rapid technical decisions. The model involves screening for environmental risk sources, focussing on persistent, bioaccumulative and toxic (PBT) substances that may be discharged into water resources. It is a simple tool that can be used to follow up actual accident scenarios in real time and to support daily activities such as site inspections. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Modeling of hazardous air pollutant removal in the pulsed corona discharge

    International Nuclear Information System (INIS)

    Derakhshesh, Marzie; Abedi, Jalal; Omidyeganeh, Mohammad

    2009-01-01

    This study investigated two parts of the performance equation of the pulsed corona reactor, one of the non-thermal, atmospheric-pressure plasma processing tools for eliminating pollutants from gas streams: first, the effect of axial dispersion in the diffusion term, and second, the effect of different reaction orders in the decomposition rate term. The mathematical model was developed to predict the effluent concentration of the pulsed corona reactor from a mass balance accounting for axial dispersion, linear velocity and the decomposition rate of the pollutant. The steady-state form of this equation was subsequently solved assuming different reaction orders. For the derivation of the performance equation of the reactor, it was assumed that the decomposition rate of the pollutant is directly proportional to the discharge power and the concentration of the pollutant. The results were validated and compared against another published model using its experimental data. The model developed in this study was also validated against two other experimental datasets in the literature for N2O.
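
    For the first-order case, the steady-state axial-dispersion balance has the classical Danckwerts closed-vessel solution, which makes the interplay of the diffusion, convection and reaction terms explicit. Parameter values below are illustrative.

```python
import numpy as np

def outlet_fraction(k, tau, Pe):
    """C_out/C_in for a first-order reaction (rate k) with residence time tau
    and Peclet number Pe = u*L/D (Danckwerts closed-vessel solution)."""
    a = np.sqrt(1.0 + 4.0 * k * tau / Pe)
    num = 4.0 * a * np.exp(Pe / 2.0)
    den = ((1 + a) ** 2 * np.exp(a * Pe / 2.0)
           - (1 - a) ** 2 * np.exp(-a * Pe / 2.0))
    return num / den

# Stronger axial dispersion (smaller Pe) degrades removal relative to the
# plug-flow limit exp(-k*tau):
for Pe in (0.5, 5.0, 50.0):
    print(Pe, outlet_fraction(k=0.5, tau=4.0, Pe=Pe))
print("plug flow:", np.exp(-0.5 * 4.0))
```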

  20. Proportional Symbol Mapping in R

    Directory of Open Access Journals (Sweden)

    Susumu Tanimura

    2006-01-01

    Visualization of spatial data on a map aids not only in data exploration but also in communicating spatial concepts or ideas to others. Although the recent cartographic functions in R are rapidly becoming richer, proportional symbol mapping, one of the common mapping approaches, has not been packaged thus far. Based on the theories of proportional symbol mapping developed in cartography, the authors developed several functions for proportional symbol mapping using R, including mathematical and perceptual scaling. An example of these functions demonstrates the new expressive power and options available in R, particularly for the visualization of conceptual point data.
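
    The two scaling rules named in the abstract are easy to state outside R as well. The sketch below (in Python, for consistency with the other examples here) contrasts mathematical square-root scaling with perceptual scaling using the Flannery compensation exponent of about 0.57:

```python
import numpy as np

def symbol_radius(values, r_ref=10.0, v_ref=None, perceptual=False):
    """Radii for proportional circles; r_ref is drawn for the value v_ref."""
    values = np.asarray(values, dtype=float)
    v_ref = values.max() if v_ref is None else v_ref
    exponent = 0.57 if perceptual else 0.5    # Flannery vs mathematical
    return r_ref * (values / v_ref) ** exponent

pops = [25_000, 100_000, 400_000]
print(symbol_radius(pops))                    # area-true scaling
print(symbol_radius(pops, perceptual=True))   # compensates underestimation
```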

  1. Optical fusions and proportional syntheses

    Science.gov (United States)

    Albert-Vanel, Michel

    2002-06-01

    A tragic error is being made in the literature on color when dealing with optical fusions. They are still considered to be of an additive nature, whereas experience shows somewhat different results. The goal of this presentation is to show that fusions are, in fact, of a 'proportional' nature, tending to be additive or subtractive depending on each individual case. Pointillist paintings done in the manner of Seurat, or the spinning-disc experiment, can highlight this intermediate sector of the proportional. So let us try to examine more closely what in fact occurs, by reviewing additive, subtractive and proportional syntheses.

  2. Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)

    Energy Technology Data Exchange (ETDEWEB)

    Musson, R. M. W. [British Geological Survey, West Mains Road, Edinburgh, EH9 3LA (United Kingdom); Sellami, S. [Swiss Seismological Service, ETH-Hoenggerberg, Zuerich (Switzerland); Bruestle, W. [Regierungspraesidium Freiburg, Abt. 9: Landesamt fuer Geologie, Rohstoffe und Bergbau, Ref. 98: Landeserdbebendienst, Freiburg im Breisgau (Germany)

    2009-05-15

    The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions also had to be answered about aspects of the modelling approach to be used: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure has to be considered. (author)

  3. Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)

    International Nuclear Information System (INIS)

    Musson, R. M. W.; Sellami, S.; Bruestle, W.

    2009-01-01

    The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions also had to be answered about aspects of the modelling approach to be used: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure has to be considered. (author)

  4. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are
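
    The aggregation effect is easy to reproduce in miniature: continuous-time exponential durations grouped into weeks yield a discrete-time hazard that matches the continuous rate only through h = 1 - exp(-lambda*w). A sketch (not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, week = 0.05, 7.0                       # true hazard per day, interval width
durations = rng.exponential(1 / lam, 100_000)

weeks = np.floor(durations / week).astype(int)
k = np.arange(weeks.max() + 1)
at_risk = np.array([(weeks >= i).sum() for i in k])
events = np.bincount(weeks)
h_discrete = events / at_risk               # empirical discrete-time hazard
print(h_discrete[:3], 1 - np.exp(-lam * week))   # both ~0.295
```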

  5. Association of Dietary Proportions of Macronutrients with Visceral Adiposity Index: Non-Substitution and Iso-Energetic Substitution Models in a Prospective Study.

    Science.gov (United States)

    Moslehi, Nazanin; Ehsani, Behnaz; Mirmiran, Parvin; Hojjat, Parvane; Azizi, Fereidoun

    2015-10-26

    We aimed to investigate associations between dietary macronutrient proportions and prospective changes in the visceral adiposity index (ΔVAI). The study included 1254 adults (18-74 years) from the Tehran Lipid and Glucose Study (TLGS), who were followed for three years. Dietary intakes were assessed twice using food frequency questionnaires. Associations of dietary macronutrients with ΔVAI and the risk of visceral adiposity dysfunction (VAD) after three years were investigated. The percentage of energy intake from protein in the total population, and from fat in women, was associated with higher increases in VAI. A 5% higher energy intake from protein substituted for carbohydrate, monounsaturated fatty acids (MUFAs), or polyunsaturated fatty acids (PUFAs) was associated with higher ΔVAI. Higher energy intake from animal protein substituted for PUFAs was positively associated with ΔVAI. Substituting protein and PUFAs with MUFAs was related to higher ΔVAI. The associations were similar in men and women, but reached significance mostly among women. The risk of VAD increased when 1% of energy from protein was replaced with MUFAs. Substituting protein for carbohydrate and fat, and fat for carbohydrate, resulted in an increased risk of VAD in women. Higher dietary proportions of protein and animal-derived MUFA may be positively associated with ΔVAI and risk of VAD.

  6. Proportional counter end effects eliminator

    International Nuclear Information System (INIS)

    Meekins, J.F.

    1976-01-01

    An improved gas-filled proportional counter which includes a resistor network connected between the anode and cathode at the ends of the counter in order to eliminate ''end effects'' is described. 3 Claims, 2 Drawing Figures

  7. STakeholder-Objective Risk Model (STORM): Determining the aggregated risk of multiple contaminant hazards in groundwater well catchments

    DEFF Research Database (Denmark)

    Enzenhoefer, R.; Binning, Philip John; Nowak, W.

    2015-01-01

    Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any...-pathway-receptor concept, mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired

  8. Electronics for proportional drift tubes

    International Nuclear Information System (INIS)

    Fremont, G.; Friend, B.; Mess, K.H.; Schmidt-Parzefall, W.; Tarle, J.C.; Verweij, H.; CERN-Hamburg-Amsterdam-Rome-Moscow Collaboration); Geske, K.; Riege, H.; Schuett, J.; CERN-Hamburg-Amsterdam-Rome-Moscow Collaboration); Semenov, Y.; CERN-Hamburg-Amsterdam-Rome-Moscow Collaboration)

    1980-01-01

    An electronic system for the read-out of a large number of proportional drift tubes (16,000) has been designed. This system measures the deposited charge and the drift time of the charge of a particle traversing a proportional drift tube. A second event can be accepted during the read-out of the system. Up to 40 typical events can be collected and buffered before a data transfer to a computer is necessary. (orig.)

  9. Landslide hazard assessment along a mountain highway in the Indian Himalayan Region (IHR) using remote sensing and computational models

    Science.gov (United States)

    Krishna, Akhouri P.; Kumar, Santosh

    2013-10-01

    Landslide hazard assessments using computational models, artificial neural network (ANN) and frequency ratio (FR) models, were carried out for one of the important mountain highways in the Central Himalaya of the Indian Himalayan Region (IHR). Landslide-influencing factors were either calculated or extracted from spatial databases, including recent remote sensing data from LANDSAT TM, the CARTOSAT digital elevation model (DEM) and the Tropical Rainfall Measuring Mission (TRMM) satellite for rainfall data. The ANN was implemented using a multi-layered feed-forward architecture with different input, output and hidden layers. This model, based on the back-propagation algorithm, derived weights for all possible parameters of landslides and the causative factors considered. Training sites for landslide-prone and non-prone areas were identified and verified through details gathered from remote sensing and other sources. FR models are based on the observed relationships between the distribution of landslides and each landslide-related factor. The FR model implementation proved useful for assessing the spatial relationships between landslide locations and the factors contributing to their occurrence. The above computational models generated respective landslide hazard susceptibility maps for the study area. This further allowed the simulation of landslide hazard maps at a medium scale using a GIS platform and remote sensing data. Upon validation and accuracy checks, it was observed that both models produced good results, with FR having some edge over ANN-based mapping. Such statistical and functional models led to a better understanding of the relationships between landslides and preparatory factors, as well as ensuring less subjectivity compared to qualitative approaches.
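
    The FR computation itself is one line once the cell counts are tabulated: the share of landslide cells in a factor class divided by that class's share of all cells. The counts below are invented for illustration.

```python
import numpy as np

# Cells per slope class (e.g. 0-10, 10-20, 20-30, >30 degrees)...
class_cells = np.array([50_000, 30_000, 15_000, 5_000])
# ...and landslide cells observed in each class.
slide_cells = np.array([20, 90, 160, 130])

fr = (slide_cells / slide_cells.sum()) / (class_cells / class_cells.sum())
print(fr)   # FR > 1 flags classes over-represented among landslides
# Summing FR values across all factor layers gives a susceptibility index.
```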

  10. Statistical inference for the additive hazards model under outcome-dependent sampling.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and we derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of cancer associated with radon exposure.
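
    The weighted pseudo-score estimator itself is not reproduced here, but the additive hazards structure is easy to illustrate: with a constant baseline, λ(t|z) = λ0 + βz makes survival times exponential with rate λ0 + βz, so exposure shifts the hazard by a constant difference rather than a constant ratio as under proportional hazards. A minimal simulation sketch, with all rates chosen arbitrarily:

        import numpy as np

        # Additive hazards with a constant baseline: T | z ~ Exponential(lambda0 + beta*z).
        # lambda0, beta and the censoring rate are illustrative values, not from the paper.
        rng = np.random.default_rng(1)
        n, lambda0, beta = 10_000, 0.10, 0.05
        z = rng.binomial(1, 0.5, n)                 # binary exposure indicator
        t_event = rng.exponential(1.0 / (lambda0 + beta * z))
        t_cens = rng.exponential(1.0 / 0.08, n)     # independent right censoring
        time = np.minimum(t_event, t_cens)
        event = t_event <= t_cens

        # Events per unit of person-time estimate each group's hazard; their
        # difference recovers beta (~0.05) under the additive model.
        h = [event[z == g].sum() / time[z == g].sum() for g in (0, 1)]
        print(h[1] - h[0])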

  11. Modelling short term individual exposure from airborne hazardous releases in urban environments

    International Nuclear Information System (INIS)

    Bartzis, J.G.; Efthimiou, G.C.; Andronopoulos, S.

    2015-01-01

    Highlights: • The statistical behavior of the variability of individual exposure is described with a beta function. • The extreme value in the beta function is properly addressed by the [5] correlation. • Two different datasets gave clear support to the proposed novel theory and its hypotheses. - Abstract: A key issue in coping with deliberate or accidental atmospheric releases of hazardous substances is the ability to reliably predict the individual exposure downstream of the source. In many situations, the release time and/or the health-relevant exposure time is short compared to mean concentration time scales. In such cases, a significant scatter of exposure levels is expected due to the stochastic nature of turbulence. The problem becomes even more complex when dispersion occurs over urban environments. The present work is a first attempt to approximate, in generic terms, the statistical behavior of the abovementioned variability with a beta distribution probability density function (beta-pdf), which has proved quite successful. The important issue of the extreme concentration value in the beta-pdf appears to be properly addressed by the [5] correlation, for which global values of its associated constants are proposed. Two substantially different datasets, the wind-tunnel Michelstadt experiment and the field Mock Urban Setting Trial (MUST) experiment, gave clear support to the proposed novel theory and its hypotheses. In addition, the present work can be considered a basis for further investigation and model refinements.
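
    A minimal sketch of the central idea — normalizing concentration samples by an extreme value and describing the remaining variability with a beta pdf — follows. The surrogate lognormal data and the placeholder c_max are assumptions; the paper's correlation for the extreme value (its reference [5]) is not reproduced:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        c = rng.lognormal(mean=0.0, sigma=0.8, size=5000)  # surrogate exposure samples
        c_max = 1.05 * c.max()                             # assumed extreme concentration
        x = c / c_max                                      # map exposures onto (0, 1)

        # Fit a beta pdf on (0, 1) with location and scale held fixed.
        a, b, loc, scale = stats.beta.fit(x, floc=0, fscale=1)
        print(f"fitted shape parameters: a={a:.2f}, b={b:.2f}")

        # e.g. probability that an individual exposure exceeds half the extreme value:
        print("P(C > 0.5 c_max) =", stats.beta.sf(0.5, a, b))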

  12. Industry-specific risk models for numerical scoring of hazards and prioritization of safety measures

    International Nuclear Information System (INIS)

    Khali, Y.F.; Johnson, K.

    2004-01-01

    Risk analysis consists of five cornerstones that have to be viewed in a holistic manner by the risk practitioners of any organization, regardless of the industry type or the nature of its critical infrastructures. The cornerstones are hazard identification; risk assessment and consequence analysis; determination of the risk management actions required to reduce risks to acceptable levels; communication of risk insights among the stakeholders; and continuous monitoring and verification to ensure sustained attainment of tolerable risk levels. Our primary objectives in this research are twofold: first, we compare and contrast a wide spectrum of current industry-specific and application-dependent semi-quantitative risk models; second, based on the insights gained from the first task, we propose a framework for a robust risk-based approach to conducting security vulnerability assessment (SVA). Risk practitioners of critical infrastructures, such as commercial nuclear power plants, water utilities, chemical plants, transmission and distribution substations, etc., could readily use this proposed approach to classify, evaluate, and prioritize risks to support the allocation of resources required to ensure protection of public health and safety. (author)

  13. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about which specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K–12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p < …) was associated with a reduced hazard of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems, as well as by utilizing disciplinary strategies that take into consideration students' microsystem roles. PMID:22878779
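
    A discrete-time hazard model of this kind reduces to a logistic regression on a person-period data set in which each student contributes one row per interval at risk. The single-level sketch below omits the school-level random effects of the multilevel version and uses entirely fabricated data in place of SWIS records; the built-in coefficient of -0.43 for a parent-teacher conference is chosen so the odds ratio comes out near the 0.65 reported above:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        rows = []
        for student in range(500):
            ptc = rng.binomial(1, 0.3)          # received a parent-teacher conference
            for week in range(1, 19):           # 18 weekly risk periods
                logit = -3.0 + 0.04 * week - 0.43 * ptc
                event = rng.binomial(1, 1 / (1 + np.exp(-logit)))
                rows.append((student, week, ptc, event))
                if event:
                    break                       # leave the risk set after the event
        df = pd.DataFrame(rows, columns=["id", "week", "ptc", "event"])

        X = sm.add_constant(df[["week", "ptc"]])
        fit = sm.Logit(df["event"], X).fit(disp=0)
        print(np.exp(fit.params["ptc"]))        # adjusted odds ratio, ~0.65 by construction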

  14. Modelling short term individual exposure from airborne hazardous releases in urban environments

    Energy Technology Data Exchange (ETDEWEB)

    Bartzis, J.G., E-mail: bartzis@uowm.gr [University of Western Macedonia, Dept. of Mechanical Engineering, Sialvera & Bakola Str., 50100, Kozani (Greece); Efthimiou, G.C.; Andronopoulos, S. [Environmental Research Laboratory, INRASTES, NCSR Demokritos, Patriarchou Grigoriou & Neapoleos Str., 15310, Aghia Paraskevi (Greece)

    2015-12-30

    Highlights: • The statistical behavior of the variability of individual exposure is described with a beta function. • The extreme value in the beta function is properly addressed by the [5] correlation. • Two different datasets gave clear support to the proposed novel theory and its hypotheses. - Abstract: A key issue in coping with deliberate or accidental atmospheric releases of hazardous substances is the ability to reliably predict the individual exposure downstream of the source. In many situations, the release time and/or the health-relevant exposure time is short compared to mean concentration time scales. In such cases, a significant scatter of exposure levels is expected due to the stochastic nature of turbulence. The problem becomes even more complex when dispersion occurs over urban environments. The present work is a first attempt to approximate, in generic terms, the statistical behavior of the abovementioned variability with a beta distribution probability density function (beta-pdf), which has proved quite successful. The important issue of the extreme concentration value in the beta-pdf appears to be properly addressed by the [5] correlation, for which global values of its associated constants are proposed. Two substantially different datasets, the wind-tunnel Michelstadt experiment and the field Mock Urban Setting Trial (MUST) experiment, gave clear support to the proposed novel theory and its hypotheses. In addition, the present work can be considered a basis for further investigation and model refinements.

  15. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, L.; Vogel, R. M.

    2015-12-01

    Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking the hazard event series x with its corresponding failure time series t should have application to a wide class of natural hazards.
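
    For intuition on the magnitude model referenced above, the hazard rate of a Generalized Pareto variable has the simple closed form h(x) = 1/(σ + c·x) for scale σ and shape c, which the snippet below verifies numerically. The parameter values are illustrative, and this is the distributional hazard rate only, not the paper's full nonstationary derivation:

        import numpy as np
        from scipy import stats

        c, sigma = 0.1, 2.0                     # illustrative GP shape and scale
        x = np.linspace(0.01, 10, 200)
        hazard = (stats.genpareto.pdf(x, c, scale=sigma)
                  / stats.genpareto.sf(x, c, scale=sigma))

        # For the GP distribution, pdf / survival collapses to 1 / (sigma + c*x):
        assert np.allclose(hazard, 1.0 / (sigma + c * x))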

  16. Implications of different digital elevation models and preprocessing techniques to delineate debris flow inundation hazard zones in El Salvador

    Science.gov (United States)

    Anderson, E. R.; Griffin, R.; Irwin, D.

    2013-12-01

    Heavy rains and steep volcanic slopes in El Salvador cause numerous landslides every year, posing a persistent threat to the population, economy and environment. Although potential debris inundation hazard zones have been delineated using digital elevation models (DEMs), some disparities exist between the simulated zones and actual affected areas. Moreover, these hazard zones have only been identified for volcanic lahars and not the shallow landslides that occur nearly every year. This is despite the availability of tools to delineate a variety of landslide types (e.g., the USGS-developed LAHARZ software). Limitations in DEM spatial resolution, age of the data, and hydrological preprocessing techniques can contribute to inaccurate hazard zone definitions. This study investigates the impacts of using different elevation models and pit filling techniques in the final debris hazard zone delineations, in an effort to determine which combination of methods most closely agrees with observed landslide events. In particular, a national DEM digitized from topographic sheets from the 1970s and 1980s provides an elevation product at a 10 meter resolution. Both natural and anthropogenic modifications of the terrain limit the accuracy of current landslide hazard assessments derived from this source. Global products from the Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (ASTER GDEM) offer more recent data but at the cost of spatial resolution. New data derived from the NASA Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) in 2013 provide the opportunity to update hazard zones at a higher spatial resolution (approximately 6 meters). Hydrological filling of sinks or pits for current hazard zone simulation has previously been achieved through ArcInfo Spatial Analyst. Such hydrological processing typically only fills pits and can lead to drastic modifications of original elevation values
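
    Because the choice of pit-filling routine is one of the preprocessing decisions at issue here, the sketch below shows a compact priority-flood depression-filling pass — a common alternative to the ArcInfo-style fill mentioned above, not the implementation used in the study:

        import heapq
        import numpy as np

        def priority_flood_fill(dem):
            # Grow inward from the DEM border, always popping the lowest cell
            # seen so far; any unvisited neighbour below the current spill
            # level is raised to it. Returns a filled copy of the DEM.
            filled = dem.astype(float)
            nrow, ncol = filled.shape
            seen = np.zeros_like(filled, dtype=bool)
            heap = []
            for r in range(nrow):
                for c in range(ncol):
                    if r in (0, nrow - 1) or c in (0, ncol - 1):
                        heapq.heappush(heap, (filled[r, c], r, c))
                        seen[r, c] = True
            while heap:
                level, r, c = heapq.heappop(heap)
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < nrow and 0 <= cc < ncol and not seen[rr, cc]:
                        seen[rr, cc] = True
                        filled[rr, cc] = max(filled[rr, cc], level)
                        heapq.heappush(heap, (filled[rr, cc], rr, cc))
            return filled

        dem = np.array([[5, 5, 5, 5], [5, 1, 2, 5], [5, 2, 1, 5], [5, 5, 5, 5.0]])
        print(priority_flood_fill(dem))   # the interior pit is raised to the 5.0 spill level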

  17. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    Science.gov (United States)

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible hazard function shapes, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Methodologies for the assessment of earthquake-triggered landslides hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinions and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and commonly used because of the correlation between the instability factors and the locations of landslides. Within this group, statistical approaches and newer heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis for assessing landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movement and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed for El Salvador, a country in Central America where earthquake-triggered landslides are common phenomena. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 2001 January 13th earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered the dependent variable. The results of the landslide susceptibility analysis are checked using landslide
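
    A minimal logistic-regression susceptibility sketch in the spirit of the LR approach described above; the covariates, coefficients, and data are fabricated stand-ins for the slope, elevation, and roughness layers of the El Salvador inventory:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 45, n),       # slope gradient (degrees)
            rng.uniform(200, 2000, n),   # elevation (m)
            rng.uniform(0, 1, n),        # terrain roughness index
        ])
        logit = -6.0 + 0.12 * X[:, 0] + 0.001 * X[:, 1] + 1.5 * X[:, 2]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = landslide occurrence

        model = LogisticRegression(max_iter=1000).fit(X, y)
        susceptibility = model.predict_proba(X)[:, 1]  # per-cell probability map
        print(model.coef_, susceptibility[:5])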

  19. WCSPH with Limiting Viscosity for Modeling Landslide Hazard at the Slopes of Artificial Reservoir

    Directory of Open Access Journals (Sweden)

    Sauro Manenti

    2018-04-01

    This work illustrates an application of the FOSS code SPHERA v.8.0 (RSE SpA, Milano, Italy) to the simulation of landslide hazard at the slope of a water basin. SPHERA is based on the weakly compressible SPH method (WCSPH) and holds a mixture model, consistent with the packing limit of the Kinetic Theory of Granular Flow (KTGF), which was previously tested for simulating two-phase free-surface rapid flows involving water-sediment interaction. In this study a limiting viscosity parameter was implemented in the previous formulation of the mixture model to limit the growth of the apparent viscosity, thus saving computational time while preserving the solution accuracy. This approach is consistent with the experimental behavior of high polymer solutions, for which an almost constant value of viscosity may be approached at very low deformation rates near the transition zone of the elastic-plastic regime. In this application, the limiting viscosity was used as a numerical parameter for optimization of the computation. Some preliminary tests were performed by simulating a 2D erosional dam break, proving that a proper selection of the limiting viscosity leads to a considerable drop in computational time without significantly altering the numerical solution. SPHERA was then validated by simulating a 2D scale experiment reproducing the early phase of the Vajont landslide, when a tsunami wave was generated that climbed the opposite mountain side with a maximum run-up of about 270 m. The obtained maximum run-up was very close to the experimental result. The influence of saturation of the landslide material below the still water level was also accounted for, showing that the landslide dynamics can be better represented and the wave run-up properly estimated.

  20. A spatiotemporal optimization model for the evacuation of the population exposed to flood hazard

    Science.gov (United States)

    Alaeddine, H.; Serrhini, K.; Maizia, M.

    2015-03-01

    Managing the crisis caused by natural disasters, and especially by floods, requires the development of effective evacuation systems. An effective evacuation system must take into account certain constraints, including those related to the traffic network, accessibility, human resources and material equipment (vehicles, collecting points, etc.). The main objective of this work is to assist technical services and rescue forces in terms of accessibility by offering itineraries for the rescue and evacuation of people and property. We consider in this paper the evacuation of a medium-sized urban area exposed to flood hazard. In case of inundation, most people will be evacuated using their own vehicles. Two evacuation types are addressed in this paper: (1) a preventive evacuation based on a flood forecasting system and (2) an evacuation during the disaster based on flooding scenarios. The two study sites to which the developed evacuation model is applied are the Tours valley (Fr, 37), which is protected by a set of dikes (preventive evacuation), and the Gien valley (Fr, 45), which benefits from a low rate of flooding (evacuation before and during the disaster). Our goal is to construct, for each of these two sites, a chronological evacuation plan, i.e., to compute for each individual the departure date and the path to reach the assembly point (also called a shelter) according to a priority list established for this purpose. The evacuation plan must avoid congestion on the road network. Here we present a spatiotemporal optimization model (STOM) dedicated to the evacuation of populations exposed to natural disasters and more specifically to flood risk.

  1. Numerical modeling of debris avalanches at Nevado de Toluca (Mexico): implications for hazard evaluation and mapping

    Science.gov (United States)

    Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.

    2007-05-01

    The present study concerns the numerical modeling of debris avalanches on the Nevado de Toluca volcano (Mexico) using the TITAN2D simulation software, and its application to creating hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near the cities of Toluca and México City; its past activity has endangered an area inhabited today by more than 25 million people. The present work is based upon data collected during extensive field work aimed at the realization of the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has presented long phases of inactivity characterized by erosion and the emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are wide debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguán debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. The analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, performed with the TITAN2D software developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. The modeling software is built upon equations

  2. The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.

    2005-12-01

    The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME are stored in a digital library system, the Storage Resource Broker (SRB), which provides a robust and secure means of maintaining the association between the data sets and their metadata. To provide an easy

  3. Flow-R, a model for susceptibility mapping of debris flows and other gravitational hazards at a regional scale

    Directory of Open Access Journals (Sweden)

    P. Horton

    2013-04-01

    The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and that avoids over-channelization, and so produces more realistic extents. The choices of the datasets and the algorithms are open to the user, which makes the model suitable for various applications and dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time
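
    The spreading step can be illustrated with the original Holmgren direction algorithm that Flow-R improves upon: each downslope neighbour of a cell receives a share of the flow proportional to tan(β)^x, with larger exponents x converging toward single-direction flow. The sketch below implements only that basic version; Flow-R's modifications (the height factor and the persistence and friction constraints) are not reproduced:

        import numpy as np

        def holmgren_weights(window, cellsize=10.0, x=4.0):
            # Multiple-flow-direction weights for the centre cell of a 3x3
            # DEM window: share of flow to each lower neighbour ~ tan(beta)^x.
            centre = window[1, 1]
            weights = np.zeros((3, 3))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    dist = cellsize * np.hypot(dr, dc)
                    tan_beta = (centre - window[1 + dr, 1 + dc]) / dist
                    if tan_beta > 0:              # downslope neighbours only
                        weights[1 + dr, 1 + dc] = tan_beta ** x
            s = weights.sum()
            return weights / s if s > 0 else weights   # flat or pit cell: no outflow

        w = holmgren_weights(np.array([[9., 8., 7.], [9., 8., 6.], [9., 9., 7.]]))
        print(np.round(w, 3))   # nearly all flow goes to the steepest (eastern) neighbour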

  4. Development of multiwire proportional chambers

    CERN Multimedia

    Charpak, G

    1969-01-01

    It has happened quite often in the history of science that theoreticians, confronted with some major difficulty, have successfully gone back thirty years to look at ideas that had then been thrown overboard. But it is rare that experimentalists go back thirty years to look again at equipment which had become out-dated. This is what Charpak and his colleagues did to emerge with the 'multiwire proportional chamber', which has several new features making it a very useful addition to the armoury of particle detectors. In the 1930s, ion chambers, Geiger-Müller counters and proportional counters were vital pieces of equipment in nuclear physics research. Other types of detectors have since largely replaced them, but now the proportional counter, in a new array, is making a comeback.

  5. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
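
    The flavour of such a computation can be sketched as follows, with two loudly flagged assumptions: a Dirichlet posterior over vote shares built from made-up poll counts, and a generic highest-averages (D'Hondt) allocation standing in for the actual Brazilian seat-distribution rules. The paper itself worked in R; this sketch is in Python:

        import numpy as np

        def dhondt(votes, seats):
            # Highest-averages allocation: repeatedly give the next seat to the
            # party with the largest quotient votes / (seats_won + 1).
            alloc = np.zeros(len(votes), dtype=int)
            for _ in range(seats):
                alloc[np.argmax(votes / (alloc + 1))] += 1
            return alloc

        rng = np.random.default_rng(5)
        poll = np.array([450, 300, 150, 60, 40])       # hypothetical poll counts
        shares = rng.dirichlet(poll + 1, size=10_000)  # posterior draws of vote shares
        seats = np.array([dhondt(s, seats=10) for s in shares])
        print((seats >= 1).mean(axis=0))               # P(party wins at least one seat)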

  6. GIS-modelling of the spatial variability of flash flood hazard in Abu Dabbab catchment, Red Sea Region, Egypt

    Directory of Open Access Journals (Sweden)

    Islam Abou El-Magd

    2010-06-01

    In the mountainous area of the Red Sea region in southeastern Egypt, the development of new mining activities and/or domestic infrastructure requires reliable and accurate information about natural hazards, particularly flash floods. This paper presents the assessment of flash flood hazards in the Abu Dabbab drainage basin. Remotely sensed data were used to delineate the alluvial active channels, which were integrated with morphometric parameters extracted from digital elevation models (DEMs) into a geographical information system (GIS) to construct a hydrological model that provides estimates of the amount of surface runoff as well as the magnitude of flash floods. The peak discharge varies randomly at different cross-sections along the main channel. Under a consistent 10 mm rainfall event, the selected cross-section in the middle of the main channel is prone to a maximum water depth of 80 cm, which decreases to nearly 30 cm at the outlet due to transmission loss. The estimation of the spatial variability of flow parameters within the catchment at the different confluences of the constituent sub-catchments can be used in planning engineering foundations and linear infrastructure with the least flash flood hazard. Such information would indeed help decision makers and planners to minimize such hazards.

  7. Mathematical Decision Models Applied for Qualifying and Planning Areas Considering Natural Hazards and Human Dealing

    Science.gov (United States)

    Anton, Jose M.; Grau, Juan B.; Tarquis, Ana M.; Sanchez, Elena; Andina, Diego

    2014-05-01

    The authors have been involved in the use of Mathematical Decision Models (MDM) to improve knowledge and planning for large natural or administrative areas in which natural soils, climate, and agricultural and forest uses were the main factors, but where human resources and outcomes were also important and natural hazards relevant. In one line of work, they contributed to the qualification of lands of the Community of Madrid (CM), an administrative area in the centre of Spain containing a band of mountains to the north, part of the Iberian plateau and river terraces in the centre, and also the Madrid metropolis, starting from an official UPM study for the CM that qualified lands using an FAO model requiring minimums over a whole set of Soil Science criteria. From these criteria the authors first set a complementary additive qualification, and later tried an intermediate qualification combining both using fuzzy logic. The authors have also been involved, together with colleagues from Argentina and elsewhere who are in contact with local planners, in the consideration of regions and the election of management entities for them. At these general levels they adopted multi-criteria MDM, using a weighted PROMETHEE, and also an ELECTRE-I with the same elicited weights for the criteria and data, and alongside these AHP using Expert Choice, from pairwise comparisons among similar criteria structured in two levels. The alternatives depend on the case study, and these areas with monsoon climates have natural hazards that are decisive for their election and qualification, with an initial matrix used for ELECTRE and PROMETHEE. For the natural area of Arroyos Menores south of Rio Cuarto town, with the subarea of La Colacha to the north, the loess lands are rich but now suffer from water erosion forming regressive ditches that are spoiling them, and land-use alternatives must consider Soil Conservation and Hydraulic Management actions. The use of soils may be in diverse non-compatible ways, such as autochthonous forest, high value forest, traditional

  8. Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL

    Science.gov (United States)

    Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María

    2016-04-01

    The objective of this work is to determine whether there is more soil erosion after the recurrence of several forest fires in an area. To that end, an area of 22,130 ha was studied, chosen because it has a high frequency of fires. This area is located in the northwest of the Iberian Peninsula. The assessment of erosion hazard was calculated at several times using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had been burnt in the past 15 years. Given the complexity of a detailed study of such a large area, and since information is not available annually, it was necessary to select the most interesting moments. In August 2012 the most aggressive and extensive fire of the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D model (Revised Universal Soil Loss Equation) was used to calculate maps of erosion losses. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it studies the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible, using the GIS "gvSIG" (http://www.gvsig.com/es), and the metadata were taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics, some of whom suggest that it only serves to carry out comparisons between areas, and not for the calculation of absolute soil loss data. These authors argue that in field measurements the actual recovered eroded soil can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
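
    The underlying computation is the cell-wise RUSLE product A = R·K·LS·C·P, with RUSLE3D's contribution being an LS term computed from upslope contributing area and slope. A sketch with placeholder factor grids (all values fabricated):

        import numpy as np

        rng = np.random.default_rng(6)
        shape = (50, 50)
        R = rng.uniform(800, 1200, shape)    # rainfall erosivity
        K = rng.uniform(0.02, 0.05, shape)   # soil erodibility
        LS = rng.uniform(0.5, 8.0, shape)    # slope length-steepness (the RUSLE3D term)
        C = rng.uniform(0.01, 0.45, shape)   # cover management (higher after a fire)
        P = np.ones(shape)                   # support practices (none assumed)

        A = R * K * LS * C * P               # soil loss per cell
        print("mean annual soil loss:", A.mean())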

  9. Subduction zone and crustal dynamics of western Washington; a tectonic model for earthquake hazards evaluation

    Science.gov (United States)

    Stanley, Dal; Villaseñor, Antonio; Benz, Harley

    1999-01-01

    The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This

  10. System Dynamics Model to develop resilience management strategies for lifelines exposed to natural hazards

    Science.gov (United States)

    Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele

    2016-04-01

    Resilience has recently become a key concept and a crucial paradigm in the analysis of the impacts of natural disasters, mainly concerning Lifeline Systems (LS). Indeed, traditional risk management approaches require a precise knowledge of all potential hazards and a full understanding of the interconnections among different infrastructures, based on past events and trend analysis. Nevertheless, due to the inner complexity of LS, their interconnectedness and the dynamic context in which they operate (i.e. technology, economy and society), it is difficult to gain a complete comprehension of the processes influencing vulnerabilities and threats. Therefore, resilience thinking addresses the complexities of large integrated systems and the uncertainty of future threats, emphasizing the absorbing, adapting and responsive behavior of the system. Resilience thinking approaches focus on the capability of the system to deal with the unforeseeable. The increasing awareness of the role played by LS has led governmental agencies and institutions to develop resilience management strategies. Risk-prone areas, such as cities, are highly dependent on infrastructures providing essential services that support societal functions, safety, economic prosperity and quality of life. Among the LS, drinking water supply is critical for supporting citizens during emergency and recovery, since a disruption could have a range of serious societal impacts. A very well-known method to assess LS resilience is the TOSE approach. The most interesting feature of this approach is the integration of four dimensions: Technical, Organizational, Social and Economic. All four dimensions concur to the resilience level of an infrastructural system and should therefore be quantitatively assessed. Several studies have underlined that the lack of integration among the different dimensions composing the resilience concept may contribute to a mismanagement of LS in case of natural disasters

  11. Saving Money Using Proportional Reasoning

    Science.gov (United States)

    de la Cruz, Jessica A.; Garney, Sandra

    2016-01-01

    It is beneficial for students to discover intuitive strategies, as opposed to the teacher presenting strategies to them. Certain proportional reasoning tasks are more likely to elicit intuitive strategies than other tasks. The strategies that students are apt to use when approaching a task, as well as the likelihood of a student's success or…

  12. Assessment of Debris Flow Potential Hazardous Zones Using Numerical Models in the Mountain Foothills of Santiago, Chile

    Science.gov (United States)

    Celis, C.; Sepulveda, S. A.; Castruccio, A.; Lara, M.

    2017-12-01

    Debris and mudflows are some of the main geological hazards in the mountain foothills of Central Chile. The risk of flows triggered in the basins of ravines that drain the Andean frontal range into the capital city, Santiago, increases with time due to accelerated urban expansion. Susceptibility assessments have been made by several authors to detect the main active ravines in the area. The Macul and San Ramon ravines have high to medium debris flow susceptibility, whereas the Lo Cañas, Apoquindo and Las Vizcachas ravines have medium to low debris flow susceptibility. This study focuses on delimiting the potential hazardous zones using the numerical simulation program RAMMS-Debris Flows, with the Voellmy model approach, and the debris-flow model LAHARZ. For the RAMMS approach, this is carried out by back-calculating the frictional parameters in the depositional zone from a known event, the debris and mudflows in the Macul and San Ramon ravines on May 3rd, 1993. For the same scenario, we calibrate the coefficients of the LAHARZ model to match the conditions of the mountain foothills of Santiago. We use the information obtained for every main ravine in the study area, mainly because of the similarity in slopes and transported material. Simulations were made for the worst-case scenario, caused by the combination of intense rainfall storms, a high 0°C isotherm level and material availability in the basins where the flows are triggered. The results show that the runout distances are well simulated, so a debris-flow hazard map could be developed with these models. Correlation issues concerning the run-up, deposit thickness and transversal areas are reported. Hence, the models do not entirely represent the complexity of the phenomenon, but they are a reliable approximation for preliminary hazard maps.
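
    For reference, the Voellmy friction relation behind the RAMMS approach combines a dry Coulomb term with a velocity-squared turbulent term, τ = μσ + ρgu²/ξ. The sketch below evaluates it with plausible but invented parameter values; the μ and ξ back-calculated from the 1993 event in the study are not reproduced:

        import numpy as np

        def voellmy_resistance(normal_stress, velocity, mu=0.2, xi=200.0, rho=2000.0):
            # tau = mu * sigma_n + rho * g * u^2 / xi  (all SI units);
            # mu and xi are the friction parameters calibrated in RAMMS-type studies.
            g = 9.81
            return mu * normal_stress + rho * g * velocity**2 / xi

        u = np.linspace(0, 15, 4)                       # flow velocity (m/s)
        print(voellmy_resistance(normal_stress=50e3, velocity=u))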

  13. Hydrology Analysis and Modelling for Klang River Basin Flood Hazard Map

    Science.gov (United States)

    Sidek, L. M.; Rostam, N. E.; Hidayah, B.; Roseli, ZA; Majid, W. H. A. W. A.; Zahari, N. Z.; Salleh, S. H. M.; Ahmad, R. D. R.; Ahmad, M. N.

    2016-03-01

    Flooding, a common environmental hazard worldwide, has increased in recent times as a result of climate change and urbanization, with the effects felt more in developing countries. As a result, the exposure of Tenaga Nasional Berhad (TNB) substations to flooding has increased rapidly, because existing substations are located in flood-prone areas. Understanding the impact of floods on its substations, TNB has provided non-structural mitigation through the integration of the Flood Hazard Map with its substations. Hydrology analysis is the important part, providing runoff as the input for the hydraulic component.

  14. PHAZE, Parametric Hazard Function Estimation

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time period, or plant records covering a fixed time period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
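
    For the Weibull (power-law) hazard option mentioned above, the maximum likelihood estimates under time-censored data have a convenient closed form, sketched below. This is a generic power-law-process calculation under PHAZE's stated assumptions (non-homogeneous Poisson process, immediate restoration to service), not PHAZE's own code:

        import numpy as np

        def power_law_nhpp_mle(failure_times, T):
            # MLEs for lambda(t) = lam * beta * t**(beta - 1) from failure times
            # of one repairable component observed on (0, T]; beta > 1 means an
            # increasing hazard (aging), beta < 1 a decreasing one.
            t = np.asarray(failure_times, dtype=float)
            n = len(t)
            beta = n / np.sum(np.log(T / t))
            lam = n / T**beta
            return lam, beta

        # Hypothetical failure record over a 10-year observation window:
        lam, beta = power_law_nhpp_mle([1.2, 3.4, 5.1, 6.0, 7.7, 8.9, 9.5], T=10.0)
        print(f"lambda = {lam:.3f}, beta = {beta:.2f}")   # beta > 1: increasing failure rate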

  15. Wind vs Water in Hurricanes: The Challenge of Multi-peril Hazard Modeling

    Science.gov (United States)

    Powell, M. D.

    2017-12-01

    operational solution to collect wind and water level measurements, and to conduct observation based modeling of wind and water impacts. My presentation will discuss some of the challenges to wind and water hazard monitoring and modeling.

  16. The Origins of Scintillator Non-Proportionality

    Science.gov (United States)

    Moses, W. W.; Bizarri, G. A.; Williams, R. T.; Payne, S. A.; Vasil'ev, A. N.; Singh, J.; Li, Q.; Grim, J. Q.; Choong, W.-S.

    2012-10-01

    Recent years have seen significant advances in both theoretically understanding and mathematically modeling the underlying causes of scintillator non-proportionality. The core cause is that the interaction of radiation with matter invariably leads to a non-uniform ionization density in the scintillator, coupled with the fact that the light yield depends on the ionization density. The mechanisms that lead to the luminescence dependence on ionization density are incompletely understood, but several important features have been identified, notably Auger-like processes (where two carriers of excitation interact with each other, causing one to de-excite non-radiatively), the inability of excitation carriers to recombine (caused either by trapping or physical separation), and the carrier mobility. This paper reviews the present understanding of the fundamental origins of scintillator non-proportionality, specifically the various theories that have been used to explain non-proportionality.
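
    One classic empirical expression of the light-yield dependence on ionization density underlying this discussion is Birks' law, dL/dx = S·(dE/dx)/(1 + kB·dE/dx). It is shown here purely as an illustration of that dependence, not as one of the reviewed theories, and the constants are arbitrary:

        import numpy as np

        def birks_light_yield(dEdx, S=1.0, kB=0.01):
            # Birks' law: local light output saturates as the ionization
            # density (stopping power dE/dx) grows; S and kB are arbitrary here.
            return S * dEdx / (1.0 + kB * dEdx)

        dEdx = np.logspace(0, 3, 4)
        # Light per unit energy falls as ionization density rises -> non-proportional response.
        print(birks_light_yield(dEdx) / dEdx)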

  17. Disease proportions attributable to environment

    Directory of Open Access Journals (Sweden)

    Vineis Paolo

    2007-11-01

    Abstract: Population disease proportions attributable to various causal agents are popular because they present a simplified view of the contribution of each agent to the disease load. However, they are only summary figures that may easily be misinterpreted or over-interpreted, even when the causal link between an exposure and an effect is well established. This commentary discusses several issues surrounding the estimation of attributable proportions, particularly with reference to environmental causes of cancers, and critically examines two recently published papers. These issues encompass potential biases as well as the very definition of environment and of an environmental agent. The latter aspect is not just a semantic question but carries implications for the focus of preventive actions, whether centred on the material and social environment or on single individuals.
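
    The summary figure in question is typically computed with Levin's formula; the sketch below shows the calculation and the strong assumptions it leans on (the exposure prevalence and relative risk are invented numbers):

        def population_attributable_fraction(p_exposed, relative_risk):
            # Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)) -- the share of
            # the disease load removable by eliminating the exposure, assuming
            # the association is causal and unconfounded.
            excess = p_exposed * (relative_risk - 1.0)
            return excess / (1.0 + excess)

        # e.g. 30% of a population exposed to an agent carrying RR = 2:
        print(population_attributable_fraction(0.30, 2.0))   # ~0.23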

  18. Spatial Modelling of Urban Physical Vulnerability to Explosion Hazards Using GIS and Fuzzy MCDA

    Directory of Open Access Journals (Sweden)

    Yasser Ebrahimian Ghajari

    2017-07-01

    Most of the world's population is concentrated in accumulated spaces in the form of cities, making urban planning a significant issue for consideration by decision makers. Urban vulnerability is a major issue arising in urban management, and is simply defined as how vulnerable various structures in a city are to different hazards. Reducing urban vulnerability and enhancing resilience are considered essential steps towards achieving urban sustainability. To date, a vast body of literature has focused on investigating urban systems' vulnerabilities with regard to natural hazards. However, less attention has been paid to vulnerabilities resulting from man-made hazards. This study investigates the physical vulnerability of buildings in District 6 of Tehran, Iran, with respect to intentional explosion hazards. A total of 14 vulnerability criteria were identified according to the opinions of various experts, and standard maps for each of these criteria were generated in a GIS environment. Ultimately, an ordered weighted averaging (OWA) technique was applied to generate vulnerability maps for different risk conditions. The results of the present study indicate that only about 25 percent of the buildings in the study area have a low level of vulnerability under moderate risk conditions. Sensitivity analysis further illustrates the robustness of the results obtained. Finally, the paper concludes by arguing that local authorities must focus more on risk-reduction techniques in order to reduce physical vulnerability and achieve urban sustainability.
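
    The OWA aggregation itself is compact: criterion scores are sorted and weighted by rank position rather than by criterion, letting the analyst tune the aggregation between "AND-like" and "OR-like" behaviour. A sketch with fabricated scores and weights:

        import numpy as np

        def owa(scores, order_weights):
            # Sort scores in descending order and weight them by rank position.
            return np.sort(scores)[::-1] @ np.asarray(order_weights)

        # One building's standardized scores on (say) 4 of the 14 criteria:
        scores = np.array([0.9, 0.4, 0.7, 0.2])
        print(owa(scores, [0.4, 0.3, 0.2, 0.1]))  # weight skewed to high scores (OR-like)
        print(owa(scores, [0.1, 0.2, 0.3, 0.4]))  # weight skewed to low scores (AND-like)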

  19. International conference and workshop on modeling and mitigating the consequences of accidental releases of hazardous materials

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    This conference was held September 26-29, 1995 in New Orleans, Louisiana. The purpose of this conference was to provide a multidisciplinary forum for exchange of state-of-the-art information on the consequences of accidental releases of hazardous materials. Attention is focused on air dispersion of vapors. Individual papers have been processed separately for inclusion in the appropriate data bases

  20. Incorporating fine-scale drought information into an eastern US wildfire hazard model

    Science.gov (United States)

    Matthew P. Peters; Louis R. Iverson

    2017-01-01

    Wildfires in the eastern United States are generally caused by humans in locations where human development and natural vegetation intermingle, e.g. the wildland–urban interface (WUI). Knowing where wildfire hazards are elevated across the forested landscape may help land managers and property owners plan or allocate resources for potential wildfire threats. In an...

  1. Modelling risk in high hazard operations : Integrating technical, organisational and cultural factors

    NARCIS (Netherlands)

    Ale, B.J.M.; Hanea, D.M.; Sillem, S.; Lin, P.H.; Van Gulijk, C.; Hudson, P.T.W.

    2012-01-01

    Recent disasters in high hazard industries such as Oil and Gas Exploration (The Deepwater Horizon) and Petrochemical production (Texas City) have been found to have causes that range from direct technical failures through organizational shortcomings right up to weak regulation and inappropriate

  2. A Real-Time Construction Safety Monitoring System for Hazardous Gas Integrating Wireless Sensor Network and Building Information Modeling Technologies.

    Science.gov (United States)

    Cheung, Weng-Fong; Lin, Tzu-Hsuan; Lin, Yu-Cheng

    2018-02-02

    In recent years, many studies have focused on the application of advanced technology to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates a database and geometry into a digital model that provides visualization across the whole construction lifecycle. This paper integrates BIM and WSN into a unique system which enables the construction site to visually monitor the safety status via a spatial, colored interface and to remove any hazardous gas automatically. Many wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental condition (temperature and humidity) data; in any region where an abnormal status is detected, the BIM model highlights the region, and an alarm and a ventilator on site start automatically to give warning and remove the hazard. The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions, and suggestions are summarized for further applications.

  3. Hazardous Waste

    Science.gov (United States)

    ... chemicals can still harm human health and the environment. When you throw these substances away, they become hazardous waste. Some hazardous wastes come from products in our homes. Our garbage can include such hazardous wastes as old batteries, bug spray cans and paint thinner. U.S. residents ...

  4. A hydro-sedimentary modeling system for flash flood propagation and hazard estimation under different agricultural practices

    Science.gov (United States)

    Kourgialas, N. N.; Karatzas, G. P.

    2014-03-01

    A modeling system for the estimation of flash flood flow velocity and sediment transport is developed in this study. The system comprises three components: (a) a modeling framework based on the hydrological model HSPF, (b) the hydrodynamic module of the hydraulic model MIKE 11 (quasi-2-D), and (c) the advection-dispersion module of MIKE 11 as a sediment transport model. An important parameter in hydraulic modeling is Manning's coefficient, an indicator of channel resistance which is directly dependent on riparian vegetation changes. The effect of riparian vegetation on flood propagation parameters such as water depth (inundation), discharge, flow velocity, and sediment transport load is investigated in this study. Based on the obtained results, when the weed-cutting percentage is increased, the flood wave depth decreases while flow discharge, velocity and sediment transport load increase. The proposed modeling system is used to evaluate and illustrate the flood hazard for different riparian vegetation cutting scenarios. For the estimation of flood hazard, a combination of the flood propagation characteristics of water depth, flow velocity and sediment load was used. Next, a well-balanced selection of the most appropriate agricultural cutting practices for riparian vegetation was performed. Ultimately, the model results obtained for the different agricultural cutting practice scenarios can be employed to create flood protection measures for flood-prone areas. The proposed methodology was applied to the downstream part of a small Mediterranean river basin in Crete, Greece.
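
    The link asserted above between riparian vegetation and flow behaviour runs through Manning's equation, V = (1/n)·R^(2/3)·S^(1/2): cutting vegetation lowers the roughness coefficient n and so raises the velocity V. A sketch with invented channel geometry and textbook-range n values:

        import numpy as np

        def manning_velocity(n, hydraulic_radius, slope):
            # Manning's equation in SI units: V = (1/n) * R^(2/3) * S^(1/2).
            return (1.0 / n) * hydraulic_radius ** (2 / 3) * np.sqrt(slope)

        R, S = 1.2, 0.004                    # hypothetical hydraulic radius and slope
        for n in (0.035, 0.050, 0.080):      # from cut to dense riparian vegetation
            print(n, manning_velocity(n, R, S))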

  5. Introducing Geoscience Students to Numerical Modeling of Volcanic Hazards: The example of Tephra2 on VHub.org

    Directory of Open Access Journals (Sweden)

    Leah M. Courtland

    2012-07-01

    The Tephra2 numerical model for tephra fallout from explosive volcanic eruptions is specifically designed to enable students to probe ideas in model literacy, including code validation and verification, the role of simplifying assumptions, and the concepts of uncertainty and forecasting. This numerical model is implemented on the VHub.org website, a venture in cyberinfrastructure that brings together volcanological models and educational materials. The VHub.org resource provides students with the ability to explore and execute sophisticated numerical models like Tephra2. We present a strategy for using this model to introduce university students to key concepts in the use and evaluation of Tephra2 for probabilistic forecasting of volcanic hazards. Through this critical examination students are encouraged to develop a deeper understanding of the applicability and limitations of hazard models. Although the model and applications are intended for use in both introductory and advanced geoscience courses, they could easily be adapted to work in other disciplines, such as astronomy, physics, computational methods, data analysis, or computer science.

  6. PEP quark search proportional chambers

    Energy Technology Data Exchange (ETDEWEB)

    Parker, S I; Harris, F; Karliner, I; Yount, D [Hawaii Univ., Honolulu (USA); Ely, R; Hamilton, R; Pun, T [California Univ., Berkeley (USA). Lawrence Berkeley Lab.; Guryn, W; Miller, D; Fries, R [Northwestern Univ., Evanston, IL (USA)

    1981-04-01

    Proportional chambers are used in the PEP Free Quark Search to identify and remove possible background sources such as particles traversing the edges of counters, to permit geometric corrections to the dE/dx and TOF information from the scintillator and Cerenkov counters, and to look for possible high cross section quarks. The present beam pipe has a thickness of 0.007 interaction lengths (λᵢ) and is followed in both arms (each with 45° ≤ θ ≤ 135°, Δφ = 90°) by 5 proportional chambers, each 0.0008 λᵢ thick with 32 channels of pulse height readout, and by 3 thin scintillator planes, each 0.003 λᵢ thick. Following this thin front end, each arm of the detector has 8 layers of scintillator (one with scintillating light pipes) interspersed with 4 proportional chambers and a layer of lucite Cerenkov counters. Both the calculated ion statistics and measurements using He-CH₄ gas in a test chamber indicate that the chamber efficiencies should be >98% for q = 1/3. The Landau spread measured in the test was equal to that observed for normal q = 1 traversals. One scintillator plane and thin chamber in each arm will have an extra set of ADCs with a wide gate bracketing the normal one, so timing errors and tails of earlier pulses should not produce fake quarks.

  7. Assessment of groundwater contamination risk using hazard quantification, a modified DRASTIC model and groundwater value, Beijing Plain, China.

    Science.gov (United States)

    Wang, Junjie; He, Jiangtao; Chen, Honghan

    2012-08-15

    Groundwater contamination risk assessment is an effective tool for groundwater management. Most existing risk assessment methods only consider the basic contamination process based upon evaluations of hazards and aquifer vulnerability. In view of groundwater exploitation potentiality, including the value of contamination-threatened groundwater could provide relatively objective and targeted results to aid decision making. This study describes a groundwater contamination risk assessment method that integrates hazards, intrinsic vulnerability and groundwater value. Hazard harmfulness was evaluated by quantifying contaminant properties and infiltrating contaminant load, intrinsic aquifer vulnerability was evaluated using a modified DRASTIC model, and groundwater value was evaluated based on groundwater quality and aquifer storage. Two groundwater contamination risk maps were produced by combining the above factors: a basic risk map, produced by overlaying the hazard map and the intrinsic vulnerability map, and a value-weighted risk map, produced by overlaying the basic risk map and the groundwater value map. Validation was carried out against contaminant distributions and site investigation. Using the Beijing Plain, China, as an example, thematic maps of the three factors and the two risks were generated. The thematic maps suggested that landfills, gas stations and oil depots, and industrial areas were the most harmful potential contamination sources. The western and northern parts of the plain were the most vulnerable areas and had the highest groundwater value. Additionally, both the basic and value-weighted risk classes in the western and northern parts of the plain were the highest, indicating that these regions should be given priority of concern. Thematic maps should be updated regularly because of the dynamic characteristics of hazards. Subjectivity and validation means in assessing the
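
    A minimal sketch of the two-step overlay described above (hazard × vulnerability gives the basic risk; basic risk × groundwater value gives the value-weighted risk), using toy 2×2 index rasters in place of the Beijing Plain layers:

        import numpy as np

        hazard        = np.array([[3, 1], [2, 4]])   # toy hazard index raster
        vulnerability = np.array([[2, 2], [3, 1]])   # toy DRASTIC-style index raster
        gw_value      = np.array([[1, 3], [2, 2]])   # toy groundwater value raster

        basic_risk          = hazard * vulnerability   # basic risk map
        value_weighted_risk = basic_risk * gw_value    # value-weighted risk map
        print(basic_risk)
        print(value_weighted_risk)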

  8. Combining slope stability and groundwater flow models to assess stratovolcano collapse hazard

    Science.gov (United States)

    Ball, J. L.; Taron, J.; Reid, M. E.; Hurwitz, S.; Finn, C.; Bedrosian, P.

    2016-12-01

    Flank collapses are a well-documented hazard at volcanoes. Elevated pore-fluid pressures and hydrothermal alteration are invoked as potential causes for the instability in many of these collapses. Because pore pressure is linked to water saturation and permeability of volcanic deposits, hydrothermal alteration is often suggested as a means of creating low-permeability zones in volcanoes. Here, we seek to address the question: What alteration geometries will produce elevated pore pressures in a stratovolcano, and what are the effects of these elevated pressures on slope stability? We initially use a finite element groundwater flow model (a modified version of OpenGeoSys) to simulate 'generic' stratovolcano geometries that produce elevated pore pressures. We then input these results into the USGS slope-stability code Scoops3D to investigate the effects of alteration and magmatic intrusion on potential flank failure. This approach integrates geophysical data about subsurface alteration, water saturation and rock mechanical properties with data about precipitation and heat influx at Cascade stratovolcanoes. Our simulations show that it is possible to maintain high-elevation water tables in stratovolcanoes given specific ranges of edifice permeability (ideally between 10⁻¹⁵ and 10⁻¹⁶ m²). Low-permeability layers (10⁻¹⁷ m², representing altered pyroclastic deposits or altered breccias) in the volcanoes can localize saturated regions close to the surface, but they may actually reduce saturation, pore pressures, and water table levels in the core of the volcano. These conditions produce universally lower factor-of-safety (F) values than at an equivalent dry edifice with the same material properties (lower values of F indicate a higher likelihood of collapse). When magmatic intrusions into the base of the cone are added, near-surface pore pressures increase and F decreases exponentially with time (~7-8% in the first year). However, while near-surface impermeable layers
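
    Scoops3D searches many potential three-dimensional failure surfaces; the one-dimensional infinite-slope formula below is only a toy stand-in that shows why higher pore pressure u lowers the factor of safety F. All parameter values are hypothetical:

        import math

        def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
            """Infinite-slope F = [c + (gamma*z*cos^2(beta) - u)*tan(phi)]
                                 / [gamma*z*sin(beta)*cos(beta)]"""
            beta = math.radians(beta_deg)
            phi = math.radians(phi_deg)
            resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
            driving = gamma * z * math.sin(beta) * math.cos(beta)
            return resisting / driving

        for u in (0.0, 50.0, 100.0):   # pore pressure in kPa; F drops as u rises
            print(u, round(factor_of_safety(c=25.0, phi_deg=30.0, gamma=20.0,
                                            z=10.0, beta_deg=35.0, u=u), 2))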

  9. Proportion of various dendromass components of spruce (Picea abies), and partial models for modification of wind speed and radiation by pure spruce stands

    International Nuclear Information System (INIS)

    Wollmerstädt, J.; Sharma, S.C.; Marsch, M.

    1992-01-01

    Means for quantifying dendromass components of spruce stands are discussed, and partial models for the modification of radiation and wind by a pure spruce stand were developed. Using a sampling procedure, the needle dry mass and the needle-free branchwood dry mass of individual trees are recorded. Using the relationship between branch basal diameter and needle or branchwood dry mass, the total needle and branchwood dry mass of a tree is estimated. On this basis, stand or regional parameters for the allometric function between breast-height diameter and needle or branchwood dry mass can be determined for defined H/D-clusters. Published data from various sources were used in this paper. The lowest coefficients of determination were found in H/D-cluster 120 (H/D-values over 114); further differentiation within this range therefore seems necessary. For assimilation models, needle dry mass should be quantified separately by needle age class and by the morphological characteristics of needles. The estimate of tree-bole volume is based on the relationship between H/D-value and oven-dry weight. Methods for quantifying the subterranean dendromass (e.g. the dynamics of fine roots) remain problematic and require considerable effort. Spatial structure was also described by allometric functions (crown length and crown cover in relation to breast-height diameter). For the partial model expressing wind modification by the stand, standardized wind profiles related to crown canopy density were used. The modification of radiation by the stand is closely related to the vertical needle mass distribution (cumulative curves). These two partial models are to be regarded as a first approach to describing the modifying effect of the stocking.
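
    The allometric relation used throughout (dry mass = a · DBH^b) is linear in log-log space, so stand-level parameters can be fitted by ordinary least squares; a sketch with invented sample trees:

        import numpy as np

        # Hypothetical sample: breast-height diameters (cm) and needle dry masses (kg)
        dbh  = np.array([12.0, 18.0, 24.0, 31.0, 38.0])
        mass = np.array([ 3.1,  7.9, 15.2, 27.8, 44.0])

        # Fit log(mass) = log(a) + b*log(dbh)  ->  mass = a * dbh**b
        b, log_a = np.polyfit(np.log(dbh), np.log(mass), 1)
        a = np.exp(log_a)
        print(f"mass ≈ {a:.3f} * DBH^{b:.2f}")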

  10. Utilizing NASA Earth Observations to Model Volcanic Hazard Risk Levels in Areas Surrounding the Copahue Volcano in the Andes Mountains

    Science.gov (United States)

    Keith, A. M.; Weigel, A. M.; Rivas, J.

    2014-12-01

    Copahue is a stratovolcano located along the rim of the Caviahue Caldera near the Chile-Argentina border in the Andes Mountain Range. Several small towns are located in proximity to the volcano, the two largest being Baños Copahue and Caviahue. During its eruptive history, it has produced numerous lava flows, pyroclastic flows, ash deposits, and lahars. This isolated region has steep topography and little vegetation, and it is consequently poorly monitored. The need to model volcanic hazard risk has been reinforced by recent volcanic activity that intermittently released several ash plumes from December 2012 through May 2013. Exposure to volcanic ash is currently the main threat for the surrounding populations as the volcano becomes more active. The goal of this project was to study Copahue and determine the areas with the highest potential of being affected in the event of an eruption. Remote sensing techniques were used to examine and identify volcanic activity and areas vulnerable to experiencing volcanic hazards including volcanic ash, SO2 gas, lava flow, pyroclastic density currents and lahars. Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), EO-1 Advanced Land Imager (ALI), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), ISS ISERV Pathfinder, and Aura Ozone Monitoring Instrument (OMI) products were used to analyze volcanic hazards. These datasets were used to create a historic lava flow map of the Copahue volcano by identifying historic lava flows, tephra, and lahars both visually and spectrally. Additionally, a volcanic risk and hazard map for the surrounding area was created by modeling the possible extent of ash fallout, lahars, lava flow, and pyroclastic density currents (PDC) for future eruptions. These model results were then used to identify areas that should be prioritized for disaster relief and evacuation orders.

  11. Modeling of mechanical response of NiTi shape memory alloy subjected to combined thermal and non-proportional mechanical loading: A case study on helical spring actuator

    Czech Academy of Sciences Publication Activity Database

    Frost, Miroslav; Sedlák, Petr; Kadeřávek, Lukáš; Heller, Luděk; Šittner, Petr

    2016-01-01

    Roč. 27, č. 14 (2016), s. 1927-1938 ISSN 1045-389X R&D Projects: GA ČR(CZ) GP14-28306P; GA ČR GA14-15264S; GA ČR GAP107/12/0800 Institutional support: RVO:61388998 ; RVO:68378271 Keywords : shape memory alloys * R-phase * modeling * elastic anisotropy * helical spring Subject RIV: BM - Solid Matter Physics ; Magnetism; BM - Solid Matter Physics ; Magnetism (FZU-D) Impact factor: 2.255, year: 2016 http://jim.sagepub.com/content/27/14/1927.full.pdf

  12. A model used to derive hazardous waste concentration limits aiming at the reduction of toxic and hazardous wastes. Applications to illustrate the discharge of secondary categories types B and C

    International Nuclear Information System (INIS)

    Paris, P.

    1989-11-01

    This report describes a model that may be used to derive hazardous waste concentration limits in order to prevent groundwater pollution from landfill disposal. First, the leachate concentration limits are determined, taking into account the attenuation capacity of the landfill site as a whole; waste concentration limits are then derived by an elution model that assumes a constant ratio between liquid and solid concentrations. In the example, two types of landfill are considered, and in each case concentration limits are calculated for selected hazardous substances and compared with the corresponding regulatory limits. (author)
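
    A sketch of the kind of back-calculation described: an allowable groundwater concentration is scaled up by a site attenuation factor to obtain the leachate limit, and the assumed constant liquid-solid ratio (written here as a distribution coefficient Kd) then converts it into a waste concentration limit. All numbers are placeholders, not the report's values:

        # Hypothetical derivation chain for one contaminant (units: mg/L, L/kg, mg/kg)
        c_groundwater_limit = 0.05   # regulatory groundwater limit (mg/L)
        attenuation_factor  = 20.0   # assumed site attenuation, landfill to receptor
        kd                  = 8.0    # assumed constant solid/liquid ratio (L/kg)

        c_leachate_limit = c_groundwater_limit * attenuation_factor   # mg/L
        c_waste_limit    = c_leachate_limit * kd                      # mg/kg
        print(c_leachate_limit, c_waste_limit)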

  13. Modeling retrospective attribution of responsibility to hazard-managing institutions: an example involving a food contamination incident.

    Science.gov (United States)

    Johnson, Branden B; Hallman, William K; Cuite, Cara L

    2015-03-01

    Perceptions of institutions that manage hazards are important because they can affect how the public responds to hazard events. Antecedents of trust judgments have received far more attention than antecedents of attributions of responsibility for hazard events. We build upon a model of retrospective attribution of responsibility to individuals to examine these relationships regarding five classes of institutions that bear responsibility for food safety: producers (e.g., farmers), processors (e.g., packaging firms), watchdogs (e.g., government agencies), sellers (e.g., supermarkets), and preparers (e.g., restaurants). A nationally representative sample of 1,200 American adults completed an Internet-based survey in which a hypothetical scenario involving contamination of diverse foods with Salmonella served as the stimulus event. Perceived competence and good intentions of the institution moderately decreased attributions of responsibility. A stronger factor was whether an institution was deemed (potentially) aware of the contamination and free to act to prevent or mitigate it. Responsibility was rated higher the more aware and free the institution. This initial model for attributions of responsibility to impersonal institutions (as opposed to individual responsibility) merits further development. © 2014 Society for Risk Analysis.

  14. Evaluating a multi-criteria model for hazard assessment in urban design. The Porto Marghera case study

    International Nuclear Information System (INIS)

    Luria, Paolo; Aspinall, Peter A.

    2003-01-01

    The aim of this paper is to describe a new approach to major industrial hazard assessment, recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The opportunity to develop a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). The standard model, quantitative risk analysis (QRA), only provided a list of individual quantitative risk values related to single locations. The experimental model is based on a multi-criteria approach, the Analytic Hierarchy Process, which introduces expert opinions, complementary skills and expertise from different disciplines in conjunction with traditional quantitative analysis. This permitted quantitative risk-assessment data to be generated from a series of qualitative assessments of the present situation and of three future scenarios, and this information to be used as indirect quantitative measures that could be aggregated into a global risk rate. The approach is in line with the main concepts proposed by the latest European directive on Major Hazard Accidents, which recommends increasing the participation of operators, taking the other players into account and paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable)
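
    The Analytic Hierarchy Process step can be sketched as follows: expert judgments fill a pairwise-comparison matrix, and the principal eigenvector of that matrix gives the criteria weights used to aggregate the qualitative scores. The comparison values below are invented:

        import numpy as np

        # Hypothetical pairwise comparisons of three risk criteria (Saaty 1-9 scale)
        A = np.array([[1.0,     3.0, 5.0],
                      [1 / 3.0, 1.0, 2.0],
                      [1 / 5.0, 0.5, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        principal = eigvecs[:, np.argmax(eigvals.real)].real
        weights = principal / principal.sum()   # normalized priority weights
        print(np.round(weights, 3))             # e.g. ~[0.65, 0.23, 0.12]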

  15. The Herfa-Neurode hazardous waste repository in bedded salt as an operating model for safe mixed waste disposal

    International Nuclear Information System (INIS)

    Rempe, N.T.

    1991-01-01

    For 18 years, the Herfa-Neurode underground repository has demonstrated the environmentally sound disposal of hazardous waste in a former potash mine. Its principal characteristics make it an excellent analogue to the Waste Isolation Pilot Plant (WIPP). The Environmental Protection Agency has ruled, in its first conditional no-migration determination, that it is reasonably certain that no hazardous constituents of the mixed waste destined for the WIPP during its test phase will migrate from the site for up to ten years. Knowledge of and reference to the Herfa-Neurode operating model may substantially improve the no-migration variance petition for the WIPP's disposal phase and thereby expedite its approval. 2 refs., 1 fig., 1 tab

  16. LISREL Model Medical Solid Infectious Waste Hazardous Hospital Management In Medan City

    Science.gov (United States)

    Simarmata, Verawaty; Siahaan, Ungkap; Pandia, Setiaty; Mawengkang, Herman

    2018-01-01

    Hazardous and toxic waste generated by the activities of most hospitals contains various components of medical solid waste, including heavy metals with cumulative toxicity that are harmful to human health. Medical waste, whether gaseous, liquid or solid, generally falls into the hazardous and toxic category. Hospital operations aim to improve health and well-being, but they also produce waste that pollutes water, soil and air. Against this background, controlling pollution from medical solid waste is one of the fundamental problems in the city of Medan, and supervision of licensing and control alternatives in accordance with applicable regulations is the principal management instrument.

  17. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J

    2008-02-11

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.

  18. Hazard assessment of the Gschliefgraben earth flow (Austria) based on monitoring data and evolution modelling

    Science.gov (United States)

    Poisel, R.; Preh, A.; Hofmann, R.; Schiffer, M.; Sausgruber, Th.

    2009-04-01

    A rock slide onto the clayey-silty-sandy-pebbly masses in the Gschliefgraben (Upper Austria province, Lake Traunsee), which occurred in 2006, together with the humid autumn of 2007, triggered an earth flow with a volume of up to 5 million m³, moving with a maximum displacement velocity of 5 m/day during the winter of 2007-2008. Potential damage was estimated at up to €60 million due to the possible destruction of houses and of a road to a settlement with heavy tourism. Exploratory drillings revealed that the moving mass consists of alternating beds of thicker, less permeable clayey-silty layers and thinner, more permeable silty-sandy-pebbly layers. The movement front ran ahead in the creek bed. It was therefore assumed that water played an important role and that the earth flow moved due to water soaking into the ground from the area of the rock slide downslope. Inclinometer measurements showed that the uppermost, less permeable layer was sliding on a thin, more permeable layer. The movement process was analysed by numerical models (FLAC) and by conventional calculations in order to assess the hazard. The coupled flow and mechanical models showed that sections of the less permeable layer soaked with water were sliding on the thin, more permeable layer due to excessive water discharge from that more permeable layer. These sections were thrust over the downslope-lying, less soaked areas, which therefore had higher strength. The overthrust material, together with the moving front of pore-water pressures, caused the downslope material to fail and to be thrust over material lying some 50 m further downslope. A cyclic process was thus created, with no indication of a sudden sliding of the complete less permeable layer. Nevertheless, the inhabitants of 15 houses had to be evacuated for safety reasons. They could return to their homes after displacement velocities had decreased. Displacement monitoring by GPS showed that

  19. Modeling Flood Hazard Zones at the Sub-District Level with the Rational Model Integrated with GIS and Remote Sensing Approaches

    Directory of Open Access Journals (Sweden)

    Daniel Asare-Kyei

    2015-07-01

    Full Text Available Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability while global datasets have resolutions too coarse to be relevant for local scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI. Results show that about half of the study areas fall into high intensity flood zones. Empirical validation using statistical confusion matrix and the principles of Participatory GIS show that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported with local expert knowledge which accurately classified 79% of communities deemed to be highly susceptible to flood hazard. The results will assist disaster managers to reduce the risk to flood disasters at the community level where risk outcomes are first materialized.
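
    The Rational Model at the core of this approach is the classic peak-runoff formula Q = C·i·A; a metric-units sketch with a hypothetical runoff coefficient, storm intensity and catchment area:

        # Rational method: Q = C * i * A  (Q in m3/s with i in m/s and A in m2)
        def peak_runoff(c_coeff, intensity_mm_per_h, area_km2):
            i = intensity_mm_per_h / 1000.0 / 3600.0   # mm/h -> m/s
            a = area_km2 * 1.0e6                       # km2 -> m2
            return c_coeff * i * a

        # e.g. runoff coefficient 0.6, 50 mm/h storm, 12 km2 catchment
        print(f"{peak_runoff(0.6, 50.0, 12.0):.1f} m3/s")   # ≈ 100 m3/s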

  20. Incisors’ proportions in smile esthetics

    Science.gov (United States)

    Alsulaimani, Fahad F; Batwa, Waeil

    2013-01-01

    Aims: To determine whether alteration of the maxillary central and lateral incisors’ length and width, respectively, would affect perceived smile esthetics and to validate the most esthetic length and width, respectively, for the central and lateral incisors. Materials and Methods: Photographic manipulation was undertaken to produce two sets of photographs, each set of four photographs showing the altered width of the lateral incisor and length of the central length. The eight produced photographs were assessed by laypeople, dentists and orthodontists. Results: Alteration in the incisors’ proportion affected the relative smile attractiveness for laypeople (n=124), dentists (n=115) and orthodontists (n=68); dentists and orthodontists did not accept lateral width reduction of more than 0.5 mm (P<0.01), which suggests that the lateral to central incisor width ratio ranges from 54% to 62%. However, laypeople did not accept lateral width reduction of more than 1 mm (P<0.01), widening the range to be from 48% to 62%. All groups had zero tolerance for changes in central crown length (P<0.01). Conclusion: All participants recognized that the central incisors’ length changes. For lateral incisors, laypeople were more tolerant than dentists and orthodontists. This suggests that changing incisors’ proportions affects the relative smile attractiveness. PMID:24987650

  1. Landslide Hazard Assessment and Mapping in the Guil Catchment (Queyras, Southern French Alps): From Landslide Inventory to Susceptibility Modelling

    Science.gov (United States)

    Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique

    2016-04-01

    Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As a part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which had been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts when compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of the strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale, to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data if significant errors in the final results are to be avoided. In particular, a complete and accurate landslide inventory is required before the modelling
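
    Logistic regression for susceptibility mapping, in minimal form: each terrain cell receives a probability of landsliding from its predisposing factors, with coefficients trained on the inventory. The sketch below uses scikit-learn on random stand-in data rather than the Guil inventory:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Stand-in predictors per cell: e.g. slope, distance to stream, lithology score
        X = rng.normal(size=(500, 3))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # toy inventory

        model = LogisticRegression().fit(X, y)
        susceptibility = model.predict_proba(X)[:, 1]   # landslide probability per cell
        print(susceptibility[:5].round(2))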

  2. Come rain or shine: Multi-model Projections of Climate Hazards affecting Transportation in the South Central United States

    Science.gov (United States)

    Mullens, E.; Mcpherson, R. A.

    2016-12-01

    This work develops detailed trends in climate hazards affecting the Department of Transportation's Region 6 in the South Central U.S. First, a survey was developed to gather information on weather and climate hazards in the region from the transportation community, identifying key phenomena and thresholds to evaluate. Statistically downscaled datasets were obtained from the Multivariate Adaptive Constructed Analogues (MACA) project and the Asynchronous Regional Regression Model (ARRM), for a total of 21 model projections, two coupled model intercomparisons (CMIP3 and CMIP5), and four emissions pathways (A1Fi, B1, RCP8.5, RCP4.5). Specific hazards investigated include winter weather, freeze-thaw cycles, hot and cold extremes, and heavy precipitation. Projections for each of these variables were calculated for the region, utilizing spatial mapping and time series analysis at the climate division level. The results indicate that cold-season phenomena such as winter weather, freeze-thaw and cold extremes decrease in intensity and frequency, particularly under the higher emissions pathways. Nonetheless, the specific model and downscaling method yield variability in magnitudes, with the most notable decreasing trends late in the 21st century. Hot days show a pronounced increase, particularly with greater emissions, producing annual mean 100°F day frequencies by the late 21st century analogous to the 2011 heatwave over the central Southern Plains. Heavy precipitation, evidenced by return period estimates and counts over thresholds, also shows notable increasing trends, particularly between the recent past and the mid-21st century. Conversely, mean precipitation does not show significant trends and is regionally variable. Precipitation hazards (e.g., winter weather, extremes) diverge between downscaling methods and their associated model samples much more substantially than temperature, suggesting that the choice of global model and downscaled data is particularly

  3. Hazard function theory for nonstationary natural hazards

    Science.gov (United States)

    Read, Laura K.; Vogel, Richard M.

    2016-04-01

    Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
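
    In HFA terms, if p_t is the (possibly time-varying) probability that the POT magnitude exceeds the critical level in year t, then the failure time T has distribution f(t) = p_t · Π_{i<t}(1 - p_i), and the average return period is E[T]. A sketch contrasting a stationary exceedance probability with an invented upward trend:

        import numpy as np

        def mean_return_period(p):
            """E[T] for year-by-year exceedance probabilities p[0], p[1], ..."""
            survival = np.cumprod(1.0 - p)                   # P(no failure through t)
            f = p * np.concatenate(([1.0], survival[:-1]))   # P(first failure at t)
            t = np.arange(1, len(p) + 1)
            return np.sum(t * f)

        horizon = 2000
        stationary = np.full(horizon, 0.01)                  # constant "100-year" event
        trending = 0.01 * (1.0 + 0.002 * np.arange(horizon)) # hypothetical upward trend
        print(mean_return_period(stationary))   # ≈ 100 years
        print(mean_return_period(trending))     # < 100: nonstationarity shortens E[T]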

  4. Identification of natural hazards and classification of urban areas by TOPSIS model (case study: Bandar Abbas city, Iran)

    Directory of Open Access Journals (Sweden)

    Rasool Mahdavi Najafabadi

    2016-01-01

    Full Text Available In this paper, among multi-criteria models for complex decision-making and multiple-attribute models for assigning the most preferable choice, the technique for order preference by similarity to ideal solution (TOPSIS) is applied. The main objective of this research is to identify potential natural hazards in Bandar Abbas city, Iran, using the TOPSIS model, which is based on an analytical hierarchy process structure. A set of 12 relevant geomorphologic parameters, including earthquake frequency, distance from the earthquake epicentre, number of faults, flood, talus creep, landslide, land subsidence, tide, hurricane and tidal wave, dust storms with external source, wind erosion and sea level fluctuations, is considered to quantify the inputs of the model. The outputs of this study indicate that one of the three assessed regions has the maximum potential occurrence of natural hazards, while having been urbanized at a greater rate than the other regions. Furthermore, based on the Delphi method, the earthquake frequency and the landslide are the most and the least dangerous phenomena, respectively.
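
    A compact sketch of the TOPSIS ranking itself: vector-normalize the decision matrix, weight it, measure each alternative's distance to the ideal and anti-ideal points, and rank by relative closeness. The 3-region × 4-criterion matrix and weights are invented, and every criterion is treated as hazard-increasing, so the 'ideal' point here is the maximum-hazard profile:

        import numpy as np

        X = np.array([[7.0, 3.0, 5.0, 2.0],     # region 1 scores on 4 hazard criteria
                      [4.0, 6.0, 2.0, 5.0],     # region 2
                      [6.0, 5.0, 4.0, 6.0]])    # region 3
        w = np.array([0.4, 0.3, 0.2, 0.1])      # criteria weights

        V = w * X / np.linalg.norm(X, axis=0)   # weighted, vector-normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)   # 1 = closest to max-hazard profile
        print(closeness.round(3))                  # rank regions by hazard potential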

  5. A spatial hazard model for cluster detection on continuous indicators of disease: application to somatic cell score.

    Science.gov (United States)

    Gay, Emilie; Senoussi, Rachid; Barnouin, Jacques

    2007-01-01

    Methods for spatial cluster detection dealing with diseases quantified by continuous variables are few, whereas several diseases are better approached by continuous indicators. For example, subclinical mastitis of the dairy cow is evaluated using a continuous marker of udder inflammation, the somatic cell score (SCS). Consequently, this study proposed to analyze spatialized risk and cluster components of herd SCS through a new method based on a spatial hazard model. The dataset included annual SCS for 34 142 French dairy herds for the year 2000, and important SCS risk factors: mean parity, percentage of winter and spring calvings, and herd size. The model allowed the simultaneous estimation of the effects of known risk factors and of potential spatial clusters on SCS, and the mapping of the estimated clusters and their range. Mean parity and winter and spring calvings were significantly associated with subclinical mastitis risk. The model with the presence of 3 clusters was highly significant, and the 3 clusters were attractive, i.e. closeness to cluster center increased the occurrence of high SCS. The three localizations were the following: close to the city of Troyes in the northeast of France; around the city of Limoges in the center-west; and in the southwest close to the city of Tarbes. The semi-parametric method based on spatial hazard modeling applies to continuous variables, and takes account of both risk factors and potential heterogeneity of the background population. This tool allows a quantitative detection but assumes a spatially specified form for clusters.

  6. Rapid SAR and GPS Measurements and Models for Hazard Science and Situational Awareness

    Science.gov (United States)

    Owen, S. E.; Yun, S. H.; Hua, H.; Agram, P. S.; Liu, Z.; Moore, A. W.; Rosen, P. A.; Simons, M.; Webb, F.; Linick, J.; Fielding, E. J.; Lundgren, P.; Sacco, G. F.; Polet, J.; Manipon, G.

    2016-12-01

    The Advanced Rapid Imaging and Analysis (ARIA) project for Natural Hazards is focused on rapidly generating higher level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR), Differential Global Positioning System (DGPS), SAR-based change detection, and image pixel tracking have recently become critical additions to our toolset for understanding and mapping the damage caused by earthquakes, volcanic eruptions, landslides, and floods. Analyses of these data sets are still largely handcrafted following each event and are not generated rapidly and reliably enough for response to natural disasters or for timely analysis of large data sets. The ARIA project, a joint venture co-sponsored by California Institute of Technology (Caltech) and by NASA through the Jet Propulsion Laboratory (JPL), has been capturing the knowledge applied to these responses and building it into an automated infrastructure to generate imaging products in near real-time that can improve situational awareness for disaster response. In addition, the ARIA project is developing the capabilities to provide automated imaging and analysis capabilities necessary to keep up with the imminent increase in raw data from geodetic imaging missions planned for launch by NASA, as well as international space agencies. We will present the progress we have made on automating the analysis of SAR data for hazard monitoring and response using data from Sentinel 1a/b as well as continuous GPS stations. Since the beginning of our project, our team has imaged events and generated response products for events around the world. These response products have enabled many conversations with those in the disaster response community

  7. User's manual of a computer code for seismic hazard evaluation for assessing the threat to a facility by fault model. SHEAT-FM

    International Nuclear Information System (INIS)

    Sugino, Hideharu; Onizawa, Kunio; Suzuki, Masahide

    2005-09-01

    To establish a reliability evaluation method for aged structural components, we developed the probabilistic seismic hazard evaluation code SHEAT-FM (Seismic Hazard Evaluation for Assessing the Threat to a facility site - Fault Model), using a seismic motion prediction method based on a fault model. To improve the seismic hazard evaluation, the code takes the latest knowledge in the field of earthquake engineering into account; for example, it incorporates the group delay time of observed records and an update process model of active faults. This report is the user's guide of SHEAT-FM, covering the outline of the seismic hazard evaluation, the specification of input data, a sample problem for a model site, system information and the execution method. (author)

  8. A clinical trial design using the concept of proportional time using the generalized gamma ratio distribution.

    Science.gov (United States)

    Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S

    2017-11-20

    Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
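
    The proportional time idea, as described, is an accelerated-failure-time-style assumption: treatment stretches survival time by a factor λ, so S_treated(t) = S_control(t/λ). A simulation sketch of that time-stretch with a hypothetical λ, using Weibull draws as a convenient special case of the generalized gamma:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000
        shape, scale = 0.8, 12.0            # hypothetical control survival (months)
        t_control = scale * rng.weibull(shape, n)

        lam = 1.5                           # proportional time factor (assumed)
        t_treat = lam * t_control           # S_treat(t) = S_control(t / lam)

        print(np.median(t_control), np.median(t_treat))   # medians differ by factor lam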

  9. Planning ahead for asteroid and comet hazard mitigation, phase 1: parameter space exploration and scenario modeling

    Energy Technology Data Exchange (ETDEWEB)

    Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory

    2009-01-01

    The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
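
    For the kinetic-impactor option mentioned, the first-order momentum budget is Δv = β·m·v/M, where β is the momentum-enhancement factor contributed by ejecta; a toy calculation with invented values:

        # Toy kinetic-impactor deflection estimate (all values hypothetical)
        m_impactor = 500.0       # kg
        v_impact   = 10_000.0    # m/s relative impact speed
        beta       = 2.0         # momentum enhancement from ejecta
        m_asteroid = 1.0e10      # kg (roughly a ~200 m body)

        delta_v = beta * m_impactor * v_impact / m_asteroid
        print(f"delta-v = {delta_v * 1000.0:.3f} mm/s")   # ≈ 1 mm/s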

  10. Modeling hydrologic and geomorphic hazards across post-fire landscapes using a self-organizing map approach

    Science.gov (United States)

    Friedel, Michael J.

    2011-01-01

    Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organizing map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criteria following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events, with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
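
    The core SOM update is simple enough to sketch from scratch: the best-matching neuron and its grid neighbours are pulled toward each training sample, with the learning rate and neighbourhood radius shrinking over time. Random stand-in data replaces the 540-basin dataset:

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(size=(540, 6))    # stand-in basin feature vectors
        grid = rng.normal(size=(8, 8, 6))   # 8x8 map of weight vectors
        gy, gx = np.mgrid[0:8, 0:8]

        n_iter = 2000
        for it in range(n_iter):
            frac = it / n_iter
            lr, radius = 0.5 * (1 - frac), 4.0 * (1 - frac) + 0.5
            x = data[rng.integers(len(data))]
            d = np.linalg.norm(grid - x, axis=2)    # distance of x to every neuron
            by, bx = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius ** 2))
            grid += lr * h[:, :, None] * (x - grid) # pull neighbourhood toward x

        print(grid.shape)   # trained 8x8x6 map, ready for k-means clustering of neurons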

  11. Radiation hazards

    International Nuclear Information System (INIS)

    Rausch, L.

    1979-01-01

    On a scientific basis and with the aid of realistic examples, the author gives a popular introduction to an understanding and judgment of the public discussion over radiation hazards: uses and hazards of X-ray examinations, biological radiation effects, civilisation risks in comparison, origins and explanation of radiation protection regulations. (orig.)

  12. Position-sensitive proportional counter

    International Nuclear Information System (INIS)

    Kopp, M.K.

    1980-01-01

    A position-sensitive proportional counter circuit uses a conventional (low-resistance, metal-wire anode) counter for spatial resolution of an ionizing event along the anode, which functions as an RC line. A pair of preamplifiers at the anode ends act as stabilized active-capacitance loads, each comprising a series-feedback, low-noise amplifier and a unity-gain, shunt-feedback amplifier whose output is connected through a feedback capacitor to the series-feedback amplifier input. The stabilized capacitance loading of the anode allows distributed RC-line position encoding and subsequent time-difference decoding by sensing the difference in rise times of pulses at the anode ends, where the difference is primarily a response to the distributed capacitance along the anode. This allows the use of lower-resistance wire anodes for spatial radiation detection, which simplifies the construction and handling of the anodes, and stabilizes the anode resistivity at high count rates (>10⁶ counts/s). (author)
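
    The RC-line encoding means the rise time of the pulse at each anode end grows with the resistance and capacitance seen from the event position, so the rise-time difference maps monotonically onto position. A toy linear decoder, with an invented calibration constant:

        # Toy rise-time decoding for an anode of length L (calibration k is hypothetical)
        L = 100.0   # mm
        k = 0.04    # ns of rise time per mm of RC line (assumed calibration)

        def decode_position(t_rise_a, t_rise_b):
            """Event position from the rise-time difference of the two end pulses."""
            return L / 2.0 + (t_rise_a - t_rise_b) / (2.0 * k)

        # An event at 30 mm: end A sees ~1.2 ns, end B ~2.8 ns of RC-induced rise time
        print(decode_position(1.2, 2.8))   # -> 30.0 mm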

  13. Variabilidad de las proporciones molares en poblaciones humanas: un abordaje empleando modelos del desarrollo y experimentales / Variability of molar proportions in human populations: insights from developmental models and experiments in mice

    Directory of Open Access Journals (Sweden)

    Lucas A. D´Addona

    2016-03-01

    experimental studies have not yet been integrated in the context of biological anthropology. In this sense, this paper proposes: (a) to evaluate the consistency between changes in molar proportions in human populations that exhibit a wide variation in tooth size and the predictions derived from an inhibitory cascade model of dental development, and (b) to analyze the effect of the systemic factors controlling the growth of the organism on lower molar proportions using strains of Mus musculus. The crown area of the mandibular molars was estimated from the bucco-lingual and mesio-distal diameters. The results obtained for human populations showed that the pattern of variation in molar proportions is consistent with expectations derived from the inhibitory cascade model and that most groups exhibit a general trend towards molar size reduction in the antero-posterior direction. Also, a significant positive association between the total molar area and the molar ratios M2/M1 and M3/M1 was observed. In the experimental models, growth alteration caused by systemic factors (protein undernutrition and reduced growth hormone) resulted in a reduction of total molar area associated with changes in molar proportions, consistent with increased antero-posterior inhibition. Overall, these results suggest that alterations in the systemic factors that control molar area can produce changes in the proportion of activators and inhibitors and thus contribute to inter-population differentiation in molar proportions. Keywords: Inhibitory cascade model; dental size; experimental and comparative approach
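
    The inhibitory cascade model referenced here (after Kavanagh and colleagues) predicts that, for an activator/inhibitor balance a/i, relative molar sizes follow M1 : M2 : M3 = 1 : a/i : (2·a/i - 1), so the tooth row shrinks backwards whenever a/i < 1; a quick numeric check:

        # Inhibitory cascade prediction of relative molar areas for a given a/i balance
        def molar_proportions(ai_ratio):
            m1 = 1.0
            m2 = ai_ratio
            m3 = max(2.0 * ai_ratio - 1.0, 0.0)   # M3 is lost when a/i <= 0.5
            return m1, m2, m3

        for ai in (0.8, 1.0, 1.2):
            print(ai, molar_proportions(ai))   # a/i < 1 gives the M1 > M2 > M3 gradient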

  14. CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.

    1993-06-01

    CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.

  15. STakeholder-Objective Risk Model (STORM): Determining the aggregated risk of multiple contaminant hazards in groundwater well catchments

    Science.gov (United States)

    Enzenhoefer, R.; Binning, P. J.; Nowak, W.

    2015-09-01

    Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any point in time, which then affects the quality of the pumped water after transport through the aquifer. In such situations, estimating the overall risk is not trivial, and three key questions emerge: (1) How to aggregate the impacts from different contaminants and spill locations into an overall, cumulative impact on the value at risk? (2) How to properly account for the stochastic nature of spill events when converting the aggregated impact to a risk estimate? (3) How will the overall risk and subsequent decision making depend on stakeholder objectives, where stakeholder objectives refer to the values at risk, risk attitudes and risk metrics that can vary between stakeholders? In this study, we provide a STakeholder-Objective Risk Model (STORM) for assessing the total aggregated risk. Our concept is a quantitative, probabilistic and modular framework for simulation-based risk estimation. It rests on the source-pathway-receptor concept and on mass-discharge-based aggregation of stochastically occurring spill events, accounts for uncertainties in the involved flow and transport models through Monte Carlo simulation, and can address different stakeholder objectives. We illustrate the application of STORM in a numerical test case inspired by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to different approaches for risk aggregation across different hazards, contaminant types, and over time.
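
    A stripped-down Monte Carlo version of the aggregation problem: each hazard spills in a given year with some probability, impacts at the well are summed as mass discharge, and risk is read off as the probability of exceeding a stakeholder-chosen threshold. The spill probabilities and impact magnitudes are invented placeholders:

        import numpy as np

        rng = np.random.default_rng(7)
        n_sim = 100_000
        spill_prob  = np.array([0.02, 0.05, 0.01])   # per-hazard annual spill probability
        mean_impact = np.array([4.0, 1.5, 9.0])      # mean mass discharge at well (g/d)

        spills = rng.random((n_sim, 3)) < spill_prob        # which hazards spill
        impact = rng.exponential(mean_impact, (n_sim, 3))   # uncertain impact per spill
        total = (spills * impact).sum(axis=1)               # aggregated mass discharge

        threshold = 5.0                                     # stakeholder-specific metric
        print("P(total discharge > threshold) =", (total > threshold).mean())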

  16. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    Science.gov (United States)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data has transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data has allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent has allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event 'footprint'. Despite this, many national scale risk analyses still use 'constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in

  17. Modelling and developing a decision-making process of hazard zone identification in ship power plants

    International Nuclear Information System (INIS)

    Podsiadlo, Antoni; Tarelko, Wieslaw

    2006-01-01

    The most dangerous places in ships are their power plants. In particular, they are very unsafe for operators carrying out the various operation and maintenance activities required. For this reason, ship machinery should be designed to ensure maximum safety for its operators. This is a very difficult task, and it cannot be solved by means of the conventional design methods used for the design of uncomplicated technical equipment. One possible way of solving this problem is to provide appropriate tools that allow the operator's safety to be taken into account during the design process, especially at its early stages. A computer-aided system supporting the design of safe ship power plants could be such a tool. This paper deals with the development process of a prototype of such a computer-aided system for hazard zone identification in ship power plants

  18. Modelling and developing a decision-making process of hazard zone identification in ship power plants

    Energy Technology Data Exchange (ETDEWEB)

    Podsiadlo, Antoni [Department of Engineering Sciences, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)]. E-mail: topo@am.gdynia.pl; Tarelko, Wieslaw [Department of Engineering Sciences, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)]. E-mail: tar@am.gdynia.pl

    2006-04-15

    The most dangerous places in ships are their power plants. In particular, they are very unsafe for operators carrying out the various operation and maintenance activities required. For this reason, ship machinery should be designed to ensure maximum safety for its operators. This is a very difficult task, and it cannot be solved by means of the conventional design methods used for the design of uncomplicated technical equipment. One possible way of solving this problem is to provide appropriate tools that allow the operator's safety to be taken into account during the design process, especially at its early stages. A computer-aided system supporting the design of safe ship power plants could be such a tool. This paper deals with the development process of a prototype of such a computer-aided system for hazard zone identification in ship power plants.

  19. CAirTOX, An inter-media transfer model for assessing indirect exposures to hazardous air contaminants

    International Nuclear Information System (INIS)

    McKone, T.E.

    1994-01-01

    Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out
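
    Because every input carries an arithmetic mean and a coefficient of variation, a Monte Carlo add-on only needs to turn those two numbers into a sampling distribution; for a lognormal choice the conversion is closed-form. A sketch with arbitrary mean/CV values:

        import numpy as np

        def lognormal_from_mean_cv(mean, cv, size, rng):
            """Sample a lognormal given its arithmetic mean and coefficient of variation."""
            sigma2 = np.log(1.0 + cv ** 2)
            mu = np.log(mean) - 0.5 * sigma2
            return rng.lognormal(mu, np.sqrt(sigma2), size)

        rng = np.random.default_rng(0)
        samples = lognormal_from_mean_cv(mean=2.5, cv=0.8, size=100_000, rng=rng)
        print(samples.mean(), samples.std() / samples.mean())   # ≈ 2.5 and ≈ 0.8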

  20. Identifying hazardous alcohol consumption during pregnancy: implementing a research-based model in real life.

    Science.gov (United States)

    Göransson, Mona; Magnusson, Asa; Heilig, Markus

    2006-01-01

    It has been repeatedly demonstrated that hazardous alcohol use during pregnancy is rarely detected in regular antenatal care, and that detection can be markedly improved using systematic screening. A major challenge is to translate research-based strategies into regular antenatal care. Here, we examined whether a screening strategy using the Alcohol Use Disorder Test (AUDIT) and time-line follow-back (TLFB) could be implemented under naturalistic conditions and within available resources, and whether it would improve detection to the extent previously shown in a research context. Regular midwives at a large antenatal care clinic were randomized to receive brief training and then implement AUDIT and TLFB ("intervention"), or to a waiting-list control group continuing to deliver regular care ("control"). In the intervention condition, AUDIT was used to collect data about alcohol use during the year preceding pregnancy, and TLFB to assess actual consumption during the first trimester. Data were collected from new admissions over 6 months. Drop-out was higher among patients of the intervention-group midwives than of the control midwives, 14% (23/162) versus 0% (0/153). Systematic screening identified patients with hazardous use during the year preceding pregnancy, i.e. an AUDIT score of 6 or higher (17%, 23/139), and patients with ongoing consumption exceeding 70 g/week and/or binge consumption according to TLFB (17%, 24/139), to a significantly higher degree than regular antenatal screening (0/162). The AUDIT- and TLFB-positive populations overlapped partially, with 36/139 subjects screening positive with either of the instruments and 11/139 positive for both. We confirm previous findings that alcohol use during pregnancy is more extensive in Sweden than has generally been realized. Systematic screening using AUDIT and TLFB detects hazardous use in a manner that regular antenatal care does not. This remains true under naturalistic conditions, following minimal training of regular antenatal care staff, and can be achieved with minimal resources. The proposed

  1. Marine natural hazards in coastal zone: observations, analysis and modelling (Plinius Medal Lecture)

    Science.gov (United States)

    Didenkulova, Ira

    2010-05-01

    Giant surface waves approaching the coast frequently cause extensive coastal flooding, destruction of coastal constructions and loss of lives. Such waves can be generated by various phenomena: strong storms and cyclones, underwater earthquakes, high-speed ferries, aerial and submarine landslides. The most famous examples of such events are the catastrophic tsunami in the Indian Ocean, which occurred on 26 December 2004, and hurricane Katrina (28 August 2005) in the Atlantic Ocean. The huge storm in the Baltic Sea on 9 January 2005, which produced unexpectedly long waves in many areas of the Baltic Sea, and the unusually high surges created by long waves from high-speed ferries should also be mentioned as examples of regional marine natural hazards connected with extensive runup of certain types of waves. The processes of wave shoaling and runup for all these different marine natural hazards (tsunami, coastal freak waves, ship waves) are studied based on rigorous solutions of nonlinear shallow-water theory. The key and novel results presented here are: i) parameterization of basic formulas for extreme runup characteristics for bell-shaped waves, showing that they weakly depend on the initial wave shape, which is usually unknown in real sea conditions; ii) runup analysis of periodic asymmetric waves with a steep front, as such waves penetrate inland over larger distances and with larger velocities than symmetric waves; iii) statistical analysis of irregular wave runup demonstrating that wave nonlinearity nearshore does not influence the probability distribution of the velocity of the moving shoreline or its moments, but does influence the vertical displacement of the moving shoreline (runup). Wave runup on convex beaches and in narrow bays, which allow abnormal wave amplification, is also discussed. The described analytical results are used to explain the observed extreme runup of tsunami, freak (sneaker) waves and ship waves on different coasts

  2. Divine proportions in attractive and nonattractive faces.

    Science.gov (United States)

    Pancherz, Hans; Knapp, Verena; Erbe, Christina; Heiss, Anja Melina

    2010-01-01

    To test Ricketts' 1982 hypothesis that facial beauty is measurable by comparing attractive and nonattractive faces of females and males with respect to the presence of the divine proportions. The analysis of frontal-view facial photos of 90 cover models (50 females, 40 males) from famous fashion magazines and of 34 attractive (29 females, five males) and 34 nonattractive (13 females, 21 males) persons selected from a group of former orthodontic patients was carried out in this study. Based on Ricketts' method, five transverse and seven vertical facial reference distances were measured and compared with the corresponding calculated divine distances expressed in phi-relationships (φ = 1.618). Furthermore, transverse and vertical facial disproportion indices were created. For both the models and patients, all the reference distances varied considerably from the respective divine values. The average deviations ranged from 0.3% to 7.8% in the female groups of models and attractive patients, with no difference between them. In the male groups of models and attractive patients, the average deviations ranged from 0.2% to 11.2%. When comparing attractive and nonattractive female, as well as male, patients, deviations from the divine values for all variables were larger in the nonattractive sample. Attractive individuals have facial proportions closer to the divine values than nonattractive ones. In accordance with the hypothesis of Ricketts, facial beauty is measurable to some degree. COPYRIGHT © 2009 BY QUINTESSENCE PUBLISHING CO, INC.
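
    The reported deviation percentages amount to comparing each measured distance with its phi-scaled counterpart; a minimal sketch with hypothetical measurements (not data from the study):

```python
PHI = (1 + 5 ** 0.5) / 2  # 1.618..., the golden ratio

def deviation_from_divine(measured: float, reference: float) -> float:
    """Percent deviation of a measured facial distance from its 'divine'
    value, taken here as the reference distance scaled by phi."""
    divine = reference * PHI
    return 100.0 * abs(measured - divine) / divine

# Hypothetical distances in mm (illustrative only):
print(f"{deviation_from_divine(measured=71.0, reference=45.0):.1f}%")  # ~2.5%
```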

  3. Implementation of NGA-West2 ground motion models in the 2014 U.S. National Seismic Hazard Maps

    Science.gov (United States)

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Harmsen, Stephen C.; Frankel, Arthur D.

    2014-01-01

    The U.S. National Seismic Hazard Maps (NSHMs) have been an important component of seismic design regulations in the United States for the past several decades. These maps present earthquake ground shaking intensities at specified probabilities of being exceeded over a 50-year time period. The previous version of the NSHMs was developed in 2008; during 2012 and 2013, scientists at the U.S. Geological Survey have been updating the maps based on their assessment of the “best available science,” resulting in the 2014 NSHMs. The update includes modifications to the seismic source models and the ground motion models (GMMs) for sites across the conterminous United States. This paper focuses on updates in the Western United States (WUS) due to the use of new GMMs for shallow crustal earthquakes in active tectonic regions developed by the Next Generation Attenuation (NGA-West2) project. Individual GMMs, their weighted combination, and their impact on the hazard maps relative to 2008 are discussed. In general, the combined effects of lower medians and increased standard deviations in the new GMMs have caused only small changes, within 5–20%, in the probabilistic ground motions for most sites across the WUS compared to the 2008 NSHMs.
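
    For context, the stated probability levels map to annual exceedance rates through the usual Poisson assumption (a standard relationship, not specific to this paper):

```latex
P(\text{exceedance in } t\ \text{years}) = 1 - e^{-\lambda t}
\qquad\Longrightarrow\qquad
\lambda = -\frac{\ln(1 - P)}{t}
```

    For example, the common map level of 2% probability of exceedance in 50 years gives λ = -ln(0.98)/50 ≈ 4.04 × 10⁻⁴ per year, i.e. a mean return period of about 2475 years.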

  4. High-resolution marine flood modelling coupling overflow and overtopping processes: framing the hazard based on historical and statistical approaches

    Science.gov (United States)

    Nicolae Lerma, Alexandre; Bulteau, Thomas; Elineau, Sylvain; Paris, François; Durand, Paul; Anselme, Brice; Pedreros, Rodrigo

    2018-01-01

    A modelling chain was implemented in order to propose a realistic appraisal of the risk in coastal areas affected by overflowing as well as overtopping processes. Simulations are performed through a nested downscaling strategy from regional to local scale at high spatial resolution with explicit buildings, urban structures such as sea front walls and hydraulic structures liable to affect the propagation of water in urban areas. Validation of the model performance is based on hard and soft available data analysis and conversion of qualitative to quantitative information to reconstruct the area affected by flooding and the succession of events during two recent storms. Two joint probability approaches (joint exceedance contour and environmental contour) are used to define 100-year offshore conditions scenarios and to investigate the flood response to each scenario in terms of (1) maximum spatial extent of flooded areas, (2) volumes of water propagation inland and (3) water level in flooded areas. Scenarios of sea level rise are also considered in order to evaluate the potential hazard evolution. Our simulations show that for a maximising 100-year hazard scenario, for the municipality as a whole, 38 % of the affected zones are prone to overflow flooding and 62 % to flooding by propagation of overtopping water volume along the seafront. Results also reveal that for the two kinds of statistical scenarios, a difference of about 5 % in the forcing conditions (water level, wave height and period) can produce significant differences in terms of flooding, such as +13.5 % of water volumes propagating inland or +11.3 % of affected surfaces. In some areas, flood response appears to be very sensitive to the chosen scenario, with differences of 0.3 to 0.5 m in water level. The developed approach enables one to frame the 100-year hazard and to characterize spatially the robustness or the uncertainty over the results. Considering a 100-year scenario with mean sea level rise (0.6 m), hazard

  5. A Conceptual Model of Future Volcanism at Medicine Lake Volcano, California - With an Emphasis on Understanding Local Volcanic Hazards

    Science.gov (United States)

    Molisee, D. D.; Germa, A.; Charbonnier, S. J.; Connor, C.

    2017-12-01

    Medicine Lake Volcano (MLV) is the most voluminous of all the Cascade volcanoes (~600 km3), and has the highest eruption frequency after Mount St. Helens. Detailed mapping by USGS colleagues has shown that during the last 500,000 years MLV erupted >200 lava flows ranging from basalt to rhyolite, produced at least one ash-flow tuff, one caldera-forming event, and at least 17 scoria cones. Underlying these units are 23 additional volcanic units that are considered to be pre-MLV in age. Despite the very high likelihood of future eruptions, fewer than 60 of 250 mapped volcanic units (MLV and pre-MLV) have been dated reliably. A robust set of eruptive ages is key to understanding the history of the MLV system and to forecasting the future behavior of the volcano. The goals of this study are to 1) obtain additional radiometric ages from stratigraphically strategic units; 2) recalculate the recurrence rate of eruptions based on an augmented set of radiometric dates; and 3) use lava flow, PDC, ash fall-out, and lahar computational simulation models to assess the potential effects of discrete volcanic hazards locally and regionally. We identify undated target units (units in key stratigraphic positions to provide maximum chronological insight) and obtain field samples for radiometric dating (40Ar/39Ar and K/Ar) and petrology. Stratigraphic and radiometric data are then used together in the Volcano Event Age Model (VEAM) to identify changes in the rate and type of volcanic eruptions through time, with statistical uncertainty. These newly obtained datasets will be added to published data to build a conceptual model of volcanic hazards at MLV. Alternative conceptual models, for example, may be that the rate of MLV lava flow eruptions is nonstationary in time and/or space and/or volume. We explore the consequences of these alternative models on forecasting future eruptions. As different styles of activity have different impacts, we estimate these potential effects using simulation

  6. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    Science.gov (United States)

    McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.

    2017-09-01

    The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered to be high hazard for future damaging earthquakes.

  7. Hazardous materials

    Science.gov (United States)

    ... substances that could harm human health or the environment. Hazardous means dangerous, so these materials must be ...

  8. ''Hazardous'' terminology

    International Nuclear Information System (INIS)

    Powers, J.

    1991-01-01

    A number of terms (e.g., ''hazardous chemicals,'' ''hazardous materials,'' ''hazardous waste,'' and similar nomenclature) refer to substances that are subject to regulation under one or more federal environmental laws. State laws and regulations also provide additional, similar, or identical terminology that may be confused with the federally defined terms. Many of these terms appear synonymous, and it is easy to use them interchangeably. However, in a regulatory context, inappropriate use of narrowly defined terms can lead to confusion about the substances referred to, the statutory provisions that apply, and the regulatory requirements for compliance under the applicable federal statutes. This Information Brief provides regulatory definitions, a brief discussion of compliance requirements, and references for the precise terminology that should be used when referring to ''hazardous'' substances regulated under federal environmental laws. A companion CERCLA Information Brief (EH-231-004/0191) addresses ''toxic'' nomenclature

  9. The impact of hazardous industrial facilities on housing prices: A comparison of parametric and semiparametric hedonic price models

    DEFF Research Database (Denmark)

    Grislain-Letrémy, Céline; Katossky, Arthur

    2014-01-01

    The willingness of households to pay for prevention against industrial risks can be revealed by real estate markets. By using very rich microdata, we study housing prices in the vicinity of hazardous industries near three important French cities. We show that the impact of hazardous plants...... to important biases in the estimated value of the impact of hazardous plants on housing values....

  10. Computer Models Used to Support Cleanup Decision Making at Hazardous and Radioactive Waste Sites

    Science.gov (United States)

    This report is a product of the Interagency Environmental Pathway Modeling Workgroup. This report will help bring a uniform approach to solving environmental modeling problems common to site remediation and restoration efforts.

  11. Site characterization and modeling to estimate movement of hazardous materials in groundwater

    International Nuclear Information System (INIS)

    Ditmars, J.D.

    1988-01-01

    A quantitative approach for evaluating the effectiveness of site characterization measurement activities is developed and illustrated with an example application to hypothetical measurement schemes at a potential geologic repository site for radioactive waste. The method is a general one and could also be applied at sites for underground disposal of hazardous chemicals. The approach presumes that measurements will be undertaken to support predictions of the performance of some aspect of a constructed facility or natural system. It requires a quantitative performance objective, such as groundwater travel time or contaminant concentration, against which to compare predictions of performance. The approach recognizes that such predictions are uncertain because the measurements upon which they are based are uncertain. The effectiveness of measurement activities is quantified by a confidence index, β, that reflects the number of standard deviations separating the best estimate of performance from the predetermined performance objective. Measurements that reduce the uncertainty in predictions lead to increased values of β. The link between measurement and prediction uncertainties, required for the evaluation of β for a particular measurement scheme, identifies the measured quantities that significantly affect prediction uncertainty. The components of uncertainty in those key measurements are spatial variation, noise, estimation error, and measurement bias. 7 refs., 4 figs
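
    Written out, the confidence index takes a simple form; the symbols below are our illustration of the stated definition, not notation taken from the report:

```latex
\beta = \frac{\lvert \hat{P} - P_{\mathrm{obj}} \rvert}{\sigma_{\hat{P}}}
```

    Here \hat{P} is the best estimate of the performance measure (e.g. groundwater travel time), P_obj is the predetermined performance objective, and σ_{\hat{P}} is the standard deviation of the prediction. Measurements that shrink σ_{\hat{P}} without moving \hat{P} toward the objective raise β, which is the sense in which β quantifies measurement effectiveness.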

  12. Hazardous Chemicals

    Centers for Disease Control (CDC) Podcasts

    Chemicals are a part of our daily lives, providing many products and modern conveniences. With more than three decades of experience, The Centers for Disease Control and Prevention (CDC) has been in the forefront of efforts to protect and assess people's exposure to environmental and hazardous chemicals. This report provides information about hazardous chemicals and useful tips on how to protect you and your family from harmful exposure.

  13. Welding hazards

    International Nuclear Information System (INIS)

    Khan, M.A.

    1992-01-01

    Welding technology is advancing rapidly in the developed countries and has turned into a science. Welding processes involving the use of electricity include resistance welding. Welding shops are opened in residential areas, causing safety hazards, particularly to teenagers and children who eagerly watch the welding arc with their naked eyes. There are radiation hazards from ultraviolet rays, which cause skin and eye irritation. Welding arc light of such intensity can damage the eyes. (Orig./A.B.)

  14. Performance of Models for Flash Flood Warning and Hazard Assessment: The 2015 Kali Gandaki Landslide Dam Breach in Nepal

    Directory of Open Access Journals (Sweden)

    Jeremy D. Bricker

    2017-02-01

    Full Text Available The 2015 magnitude 7.8 Gorkha earthquake and its aftershocks weakened mountain slopes in Nepal. Co- and postseismic landsliding and the formation of landslide-dammed lakes along steeply dissected valleys were widespread, among them a landslide that dammed the Kali Gandaki River. Overtopping of the landslide dam resulted in a flash flood downstream, though casualties were prevented because of timely evacuation of low-lying areas. We hindcast the flood using the BREACH physically based dam-break model for upstream hydrograph generation, and compared the resulting maximum flow rate with those resulting from various empirical formulas and a simplified hydrograph based on published observations. Subsequent modeling of downstream flood propagation was compromised by a coarse-resolution digital elevation model with several artifacts. Thus, we used a digital-elevation-model preprocessing technique that combined carving and smoothing to derive topographic data. We then applied the 1-dimensional HEC-RAS model for downstream flood routing, and compared it to the 2-dimensional Delft-FLOW model. Simulations were validated using rectified frames of a video recorded by a resident during the flood in the village of Beni, allowing estimation of maximum flow depth and speed. Results show that hydrological smoothing is necessary when using coarse topographic data (such as SRTM or ASTER), as using raw topography underestimates flow depth and speed and overestimates flood wave arrival lag time. Results also show that the 2-dimensional model produces more accurate results than the 1-dimensional model, but the 1-dimensional model generates a more conservative result and can be run in a much shorter time. Therefore, a 2-dimensional model is recommended for hazard assessment and planning, whereas a 1-dimensional model would facilitate real-time warning declaration.
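
    Among empirical peak-discharge formulas of the kind compared against BREACH output, a widely cited example is the Froehlich (1995) regression; the sketch below is illustrative only, since the abstract does not list the specific formulas used, and the input values are hypothetical rather than the Kali Gandaki data.

```python
def froehlich_1995_peak_discharge(volume_m3: float, head_m: float) -> float:
    """Empirical peak breach outflow (m^3/s) from impounded volume (m^3)
    and head of water above the breach invert (m), after Froehlich (1995)."""
    return 0.607 * volume_m3**0.295 * head_m**1.24

# Hypothetical landslide-dam values:
qp = froehlich_1995_peak_discharge(volume_m3=6.0e6, head_m=25.0)
print(f"estimated peak discharge ~ {qp:.0f} m^3/s")  # ~3300 m^3/s
```

    Regressions like this bracket the physically based BREACH result cheaply, which is useful when dam geometry and material properties are poorly constrained.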

  15. Introduction: Hazard mapping

    Science.gov (United States)

    Baum, Rex L.; Miyagi, Toyohiko; Lee, Saro; Trofymchuk, Oleksandr M

    2014-01-01

    Twenty papers were accepted into the session on landslide hazard mapping for oral presentation. The papers presented susceptibility and hazard analysis based on approaches ranging from field-based assessments to statistically based models to assessments that combined hydromechanical and probabilistic components. Many of the studies have taken advantage of increasing availability of remotely sensed data and nearly all relied on Geographic Information Systems to organize and analyze spatial data. The studies used a range of methods for assessing performance and validating hazard and susceptibility models. A few of the studies presented in this session also included some element of landslide risk assessment. This collection of papers clearly demonstrates that a wide range of approaches can lead to useful assessments of landslide susceptibility and hazard.

  16. Development of a tornado wind speed hazard model for limited area (TOWLA) for nuclear power plants at a coastline

    International Nuclear Information System (INIS)

    Hirakuchi, Hiromaru; Nohara, Daisuke; Sugimoto, Soichiro; Eguchi, Yuzuru; Hattori, Yasuo

    2016-01-01

    It is necessary for Japanese electric power companies to assess tornado risks at nuclear power plants according to a new regulation issued in 2013. The new regulatory guide recommends selecting a long narrow strip area along a coastline, with a width of 5 km to the seaward and landward sides, as the target area of tornado risk assessment, because most Japanese tornadoes have been reported near the coastline, where all Japanese nuclear power plants are located. However, it is very difficult to evaluate a tornado hazard along a coastline, because there is no available information on F-scale and damage length/width for tornadic waterspouts. The purpose of this study is to propose a new tornado wind hazard model for limited areas (TOWLA), which can be applied to a long narrow strip area along a coastline. In order to consider tornadic waterspouts that moved inland, we evaluate the number of waterspouts entering/passing the target area and add them to the total number of tornadoes that occurred in the area. A characteristic of the model is the use of 'segment lengths' instead of damage lengths. The segment length is the part of the tornado footprint lying within the long narrow strip area. We show two methods for segment length computation. One is based on tornado records: latitude and longitude of the tornado genesis and dissipation locations. The other is to compute the expected segment length based on the geometrical relationship among the damage length, area width, and directional characteristics of tornado movement. The new model can also consider the variation of tornado intensity along the path length and across the path width. (author)
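
    The geometrical method can be sketched as follows; the straight-track geometry and all names below are our own illustration, not the authors' exact formulation, which also weights by the observed directional distribution of tornado movement.

```python
import math

def segment_length(path_length_km: float, strip_width_km: float,
                   heading_deg: float) -> float:
    """Length of a straight tornado track lying inside a coastal strip.

    heading_deg is the angle between the track and the coastline; a track
    parallel to the coast (0 deg) lies entirely within the strip, while an
    oblique track is capped by the distance needed to traverse the strip.
    """
    theta = math.radians(heading_deg)
    if math.sin(theta) == 0.0:
        return path_length_km
    traverse = strip_width_km / math.sin(theta)  # distance to cross the strip
    return min(path_length_km, traverse)

# Hypothetical values: 12 km damage length, 10 km strip (5 km each side), 45 deg heading.
print(segment_length(12.0, 10.0, 45.0))
```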

  17. Hazard assessment and risk management of offshore production chemicals

    International Nuclear Information System (INIS)

    Schobben, H.P.M.; Scholten, M.C.T.; Vik, E.A.; Bakke, S.

    1994-01-01

    There is a clear need for harmonization of the regulations with regard to the use and discharge of drilling and production chemicals in the North Sea. Therefore the CHARM (Chemical Hazard Assessment and Risk Management) model was developed. Both government (of several countries) and industry (E and P companies and chemical suppliers) participated in the project. The CHARM model is discussed and accepted by OSPARCOM. The CHARM model consists of several modules. The model starts with a prescreening on the basis of hazardous properties such as persistency, accumulation potential and appearance on black lists. The core of the model consists of modules for hazard assessment and risk analysis. Hazard assessment covers a general environmental evaluation of a chemical on the basis of the intrinsic properties of that chemical. Risk analysis covers a more specific evaluation of the environmental impact from the use of a production chemical, or a combination of chemicals, under actual conditions. In the risk management module the user is guided to reduce the total risk of all chemicals used on a platform by defining measures in the most cost-effective way. The model calculates the environmental impact for the marine environment. To this end, three compartments are distinguished: pelagic, benthic and food chain. Both hazard assessment and risk analysis are based on a proportional comparison of an estimated PEC with an estimated NEC. The PEC is estimated from the use, release, dilution and fate of the chemical, and the NEC is estimated from the available toxicity data of the chemicals
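
    The proportional PEC/NEC comparison reduces to a risk quotient per environmental compartment; a minimal sketch with hypothetical values, not CHARM's actual fate and dilution calculations:

```python
def risk_quotient(pec: float, nec: float) -> float:
    """PEC/NEC ratio; values above 1 indicate potential environmental risk."""
    return pec / nec

# Hypothetical predicted exposure and no-effect concentrations (mg/L):
compartments = {
    "pelagic":    (0.004, 0.020),
    "benthic":    (0.030, 0.010),
    "food chain": (0.001, 0.050),
}
for name, (pec, nec) in compartments.items():
    rq = risk_quotient(pec, nec)
    verdict = "potential risk" if rq > 1.0 else "acceptable"
    print(f"{name:10s} RQ = {rq:5.2f} -> {verdict}")
```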

  18. U.S. Department of Energy Workers' mental models of radiation and chemical hazards in the workplace

    International Nuclear Information System (INIS)

    Quadrel, M.J.; Blanchard, K.A.; Lundgren, R.E.; McMakin, A.H.; Mosley, M.T.; Strom, D.J.

    1994-05-01

    A pilot study was performed to test the mental models methodology regarding knowledge and perceptions of U.S. Department of Energy contractor radiation workers about ionizing radiation and hazardous chemicals. The mental models methodology establishes a target population's beliefs about risks and compares them with current scientific knowledge. The ultimate intent is to develop risk communication guidelines that address information gaps or misperceptions that could affect decisions and behavior. In this study, 15 radiation workers from the Hanford Site in Washington State were interviewed about radiation exposure processes and effects. Their beliefs were mapped onto a science model of the same topics to see where differences occurred. In general, workers' mental models covered many of the high-level parts of the science model but did not have the same level of detail. The following concepts appeared to be well understood by most interviewees: types, form, and properties of workplace radiation; administrative and physical controls to reduce radiation exposure risk; and the relationship of dose and effects. However, several concepts were rarely mentioned by most interviewees, indicating potential gaps in worker understanding. Most workers did not discuss the wide range of measures for neutralizing or decontaminating individuals following internal contamination. Few noted specific ways of measuring dose or factors that affect dose. Few mentioned the range of possible effects, including genetic effects, birth defects, or high dose effects. Variables that influence potential effects were rarely discussed. Workers rarely mentioned how basic radiation principles influenced the source, type, or mitigation of radiation risk in the workplace

  19. A Coupled Damage and Reaction Model for Simulating Energetic Material Response to Impact Hazards

    International Nuclear Information System (INIS)

    BAER, MELVIN R.; DRUMHELLER, D.S.; MATHESON, E.R.

    1999-01-01

    The Baer-Nunziato multiphase reactive theory for a granulated bed of energetic material is extended to allow for dynamic damage processes that generate new surfaces as well as porosity. The Second Law of Thermodynamics is employed to constrain the constitutive forms of the mass, momentum, and energy exchange functions as well as those for the mechanical damage model, ensuring that the models will be dissipative. The focus here is on the constitutive forms of the exchange functions. The mechanical constitutive modeling is discussed in a companion paper. The mechanical damage model provides dynamic surface area and porosity information needed by the exchange functions to compute combustion rates and interphase momentum and energy exchange rates. The models are implemented in the CTH shock physics code and used to simulate delayed detonations due to impacts in a bed of granulated energetic material and an undamaged cylindrical sample

  20. Great paleoearthquakes of the central Himalaya and their implications for seismotectonic models and seismic hazard assessment

    Science.gov (United States)

    Yule, D.; Lave, J.; Kumar, S.; Wesnousky, S.

    2007-12-01

    Himalaya in over 500 years and that Mw 7.5-8.4 earthquakes are the 'moderate' earthquakes. Further study to constrain the lateral extent and recurrence of the great paleoearthquakes of the central Himalaya is critical to answer important questions about the Himalaya earthquake cycle and the seismic hazard facing the rapidly urbanizing population of the region.

  1. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    Science.gov (United States)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring, and particularly the rapid geodynamics, which clearly demonstrates some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using an historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area, the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps
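
    The quantities mentioned here rest on standard relationships; schematically (generic textbook forms, not necessarily FiSH's exact internals), the frequency-magnitude analysis fits the Gutenberg-Richter relation, and a characteristic-event recurrence time follows from balancing the event's seismic moment against the fault's moment rate:

```latex
\log_{10} N(\ge M) = a - bM, \qquad
T_{\mathrm{mean}}(M_{\mathrm{char}}) \approx \frac{M_0(M_{\mathrm{char}})}{\dot{M}_0},
\qquad M_0 = 10^{1.5\,M_w + 9.1}\ \mathrm{N\,m}
```

    where N(≥M) is the annual rate of events at or above magnitude M and \dot{M}_0 is the geologically or geodetically constrained moment accumulation rate on the fault.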

  2. First approaches towards modelling glacial hazards in the Mount Cook region of New Zealand's Southern Alps

    Directory of Open Access Journals (Sweden)

    S. K. Allen

    2009-03-01

    Full Text Available Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover and combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote sensing based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well vegetated, low gradient outlet areas suggest catastrophic dam failure and flooding is unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modeling with RAMMS was introduced for local scale analyses of rock avalanche events, and was shown to be a useful tool for establishing accurate flow path dynamics and estimating potential event magnitudes. Potential debris flows originating from steep moraine and talus slopes can reach road and built infrastructure when worst-case runout distances are considered, while potential effects from ice avalanches are limited to walking tracks and alpine huts located in close proximity to initiation zones of steep ice. Further local scale studies of these processes are required, leading towards a full hazard assessment, and changing glacial conditions over coming decades will necessitate ongoing monitoring and reassessment of initiation zones and potential impacts.

  3. A Stochastic Model for the Landing Dispersion of Hazard Detection and Avoidance Capable Flight Systems

    Science.gov (United States)

    Witte, L.

    2014-06-01

    To support landing site assessments for HDA-capable flight systems and to facilitate trade studies between potential HDA architectures and the yielded probability of safe landing, a stochastic landing dispersion model has been developed.

  4. Climate change impact assessment on Veneto and Friuli plain groundwater. Part I: An integrated modeling approach for hazard scenario construction

    International Nuclear Information System (INIS)

    Baruffi, F.; Cisotto, A.; Cimolino, A.; Ferri, M.; Monego, M.; Norbiato, D.; Cappelletto, M.; Bisaglia, M.; Pretner, A.; Galli, A.; Scarinci, A.; Marsala, V.; Panelli, C.; Gualdi, S.; Bucchignani, E.; Torresan, S.; Pasini, S.; Critto, A.

    2012-01-01

    Climate change impacts on water resources, particularly groundwater, is a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers due to its relevant link with European water policy directives (e.g. 2000/60/EC and 2007/118/EC) and related environmental objectives. The understanding of long-term impacts of climate variability and change is therefore a key challenge in order to address effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life + project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of high Veneto and Friuli Plain, Northern Italy. Given the aim to evaluate potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961–1990 and the projection period 2010–2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections have been done prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling, then, provided the precipitation, temperatures and evapo-transpiration fields used for the impact analysis. Based on downscaled climate projections, 3 reference scenarios for the period 2071–2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble

  5. Climate change impact assessment on Veneto and Friuli plain groundwater. Part I: An integrated modeling approach for hazard scenario construction

    Energy Technology Data Exchange (ETDEWEB)

    Baruffi, F.; Cisotto, A.; Cimolino, A.; Ferri, M.; Monego, M.; Norbiato, D.; Cappelletto, M.; Bisaglia, M. (Autorita di Bacino dei Fiumi dell'Alto Adriatico, Venice, Italy); Pretner, A.; Galli, A.; Scarinci, A.; Marsala, V.; Panelli, C. (SGI Studio Galli Ingegneria, Sarmeola di Rubano, Italy); Gualdi, S.; Bucchignani, E.; Torresan, S.; Pasini, S.; Critto, A. (Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Italy); and others

    2012-12-01

    Climate change impacts on water resources, particularly groundwater, is a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers due to its relevant link with European water policy directives (e.g. 2000/60/EC and 2007/118/EC) and related environmental objectives. The understanding of long-term impacts of climate variability and change is therefore a key challenge in order to address effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life + project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of high Veneto and Friuli Plain, Northern Italy. Given the aim to evaluate potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections have been done prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling, then, provided the precipitation, temperatures and evapo-transpiration fields used for the impact analysis. Based on downscaled climate projections, 3 reference scenarios for the period 2071-2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble produced

  6. Integrated satellite InSAR and slope stability modeling to support hazard assessment at the Safuna Alta glacial lake, Peru

    Science.gov (United States)

    Cochachin, Alejo; Frey, Holger; Huggel, Christian; Strozzi, Tazio; Büechi, Emanuel; Cui, Fanpeng; Flores, Andrés; Saito, Carlos

    2017-04-01

    The Safuna glacial lakes (77° 37' W, 08° 50' S) are located in the headwater of the Tayapampa catchment, in the northernmost part of the Cordillera Blanca, Peru. The upper lake, Laguna Safuna Alta, at 4354 m asl, formed in the 1960s behind a terminal moraine of the retreating Pucajirca Glacier, named after the peak south of the lakes. Safuna Alta currently has a volume of 15 x 10^6 m3. In 2002 a rock fall of several million m3 from the proximal left lateral moraine hit the Safuna Alta lake and triggered an impact wave which overtopped the moraine dam and passed into the lower lake, Laguna Safuna Baja, which absorbed most of the outburst flood from the upper lake; the event nevertheless caused loss of cattle, degradation of agricultural land downstream and damage to a hydroelectric power station in the Quitaracsa gorge. Event reconstructions showed that the impact wave in the Safuna Alta lake had a runup height of 100 m or more, and weakened the moraine dam of Safuna Alta. This fact, in combination with the large lake volumes and the continued possibility of landslides from the left proximal moraine, poses a considerable risk for the downstream settlements as well as the recently completed Quitaracsa hydroelectric power plant. In the framework of a project funded by the European Space Agency (ESA), the hazard situation at the Safuna Alta lake is assessed by a combination of satellite radar data analysis, field investigations, and slope stability modeling. Interferometric Synthetic Aperture Radar (InSAR) analyses of ALOS-1 PALSAR-1, ALOS-2 PALSAR-2 and Sentinel-1 data from 2016 reveal terrain displacements of 2 cm y^-1 in the detachment zone of the 2002 rock avalanche. More detailed insights into the characteristics of these terrain deformations are gained by repeat surveys with differential GPS (DGPS) and tachymetric measurements. A drone flight provides the information for the generation of a high-resolution digital elevation model (DEM), which is used for the

  7. Hazardous Chemicals

    Centers for Disease Control (CDC) Podcasts

    2007-04-10

    Chemicals are a part of our daily lives, providing many products and modern conveniences. With more than three decades of experience, The Centers for Disease Control and Prevention (CDC) has been in the forefront of efforts to protect and assess people's exposure to environmental and hazardous chemicals. This report provides information about hazardous chemicals and useful tips on how to protect you and your family from harmful exposure.  Created: 4/10/2007 by CDC National Center for Environmental Health.   Date Released: 4/13/2007.

  8. Kalman-predictive-proportional-integral-derivative (KPPID)

    International Nuclear Information System (INIS)

    Fluerasu, A.; Sutton, M.

    2004-01-01

    With third-generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding over- or undershooting. To achieve this, we designed and implemented a computer-controlled temperature tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite difference thermal calculations (feedforward), and Kalman filtering of the temperature readings in order to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, to minimize the response time and to avoid over/undershooting, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratory and was used to perform coherent and time-resolved X-ray diffraction experiments.
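
    A stripped-down sketch of such a loop is shown below: a scalar Kalman filter smooths the noisy reading before a textbook PID update, to which a model-based feedforward term is added. The gains, the random-walk process model, and the example numbers are illustrative assumptions, not the published APS implementation.

```python
class ScalarKalman:
    """One-dimensional Kalman filter for a noisy temperature reading,
    using a random-walk process model."""
    def __init__(self, q: float, r: float, x0: float = 0.0, p0: float = 1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def update(self, z: float) -> float:
        self.p += self.q                # predict step inflates the variance
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct with measurement z
        self.p *= 1.0 - k
        return self.x

def kppid_step(kf, setpoint, measurement, state, kp, ki, kd, dt, feedforward=0.0):
    """One controller update: filter the reading, then PID on the filtered
    error, plus a feedforward term from a thermal model."""
    t_est = kf.update(measurement)
    err = setpoint - t_est
    state["i"] += err * dt
    d = (err - state["e_prev"]) / dt
    state["e_prev"] = err
    return kp * err + ki * state["i"] + kd * d + feedforward

# Hypothetical step: drive a sample from 300 to 800 degrees with noisy readings.
kf = ScalarKalman(q=0.01, r=4.0, x0=300.0)
state = {"i": 0.0, "e_prev": 0.0}
power = kppid_step(kf, setpoint=800.0, measurement=303.2, state=state,
                   kp=2.0, ki=0.1, kd=0.5, dt=0.1)
```

    Filtering before differentiating matters most for the derivative term, which otherwise amplifies measurement noise; the feedforward term carries the bulk of the heating command so that feedback only corrects model error.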

  9. Proportional-Integral-Resonant AC Current Controller

    Directory of Open Access Journals (Sweden)

    STOJIC, D.

    2017-02-01

    Full Text Available In this paper an improved stationary-frame AC current controller based on the proportional-integral-resonant control action (PIR) is proposed. Namely, the novel two-parameter PIR controller is applied in the stationary-frame AC current control, accompanied by the corresponding parameter-tuning procedure. In this way, the proportional-resonant (PR) controller, common in the stationary-frame AC current control, is extended by the integral (I) action in order to enable the AC current DC component tracking, and, also, to enable the DC disturbance compensation, caused by the voltage source inverter (VSI) nonidealities and by nonlinear loads. The proposed controller parameter-tuning procedure is based on the three-phase back-EMF-type load, which corresponds to a wide range of AC power converter applications, such as AC motor drives, uninterruptible power supplies, and active filters. While PIR controllers commonly have three parameters, the novel controller has two. Also, the provided parameter-tuning procedure needs only one parameter to be tuned in relation to the load and power converter model parameters, since the second controller parameter is directly derived from the required controller bandwidth value. The dynamic performance of the proposed controller is verified by means of simulation and experimental runs.
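
    In generic form, such a stationary-frame controller combines three terms (the standard textbook transfer function; the paper's two-parameter variant ties these gains together through its tuning procedure):

```latex
G_{\mathrm{PIR}}(s) = K_p + \frac{K_i}{s} + \frac{K_r\,s}{s^2 + \omega_0^2}
```

    The resonant term provides (ideally) infinite gain at the fundamental frequency ω0, forcing zero steady-state error for the AC component, while the integral term handles the DC component and DC disturbances.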

  10. Evaluating Middle Years Students' Proportional Reasoning

    Science.gov (United States)

    Hilton, Annette; Dole, Shelley; Hilton, Geoff; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is a key aspect of numeracy that is not always developed naturally by students. Understanding the types of proportional reasoning that students apply to different problem types is a useful first step to identifying ways to support teachers and students to develop proportional reasoning in the classroom. This paper describes…

  11. The egg-sharing model for human therapeutic cloning research: managing donor selection criteria, the proportion of shared oocytes allocated to research, and amount of financial subsidy given to the donor.

    Science.gov (United States)

    Heng, Boon Chin; Tong, Guo Qing; Stojkovic, Miodrag

    2006-01-01

    Recent advances in human therapeutic cloning made by Hwang and colleagues have opened up new avenues of therapy for various human diseases. However, the major bottleneck of this new technology is the severe shortage of human donor oocytes. Egg-sharing in return for subsidized fertility treatment has been suggested as an ethically justifiable and practical solution to overcome the shortage of donor oocytes for therapeutic cloning. Because the utilization of shared oocytes in therapeutic cloning research does not result in any therapeutic benefit to a second party, this would necessitate a different management strategy compared to their use for the assisted conception of infertile women who are unable to produce any oocytes of their own. It is proposed that the pool of prospective egg-sharers in therapeutic cloning research be limited only to younger women (below 30 years of age) with indications for either male partner sub-fertility or tubal blockage. With regards to the proportion of the shared gametes being allocated to research, a threshold number of retrieved oocytes should be set that if not exceeded, would result in the patient being automatically removed from the egg-sharing scheme. Any excess supernumerary oocyte above this threshold number can be contributed to science, and allocation should be done in a randomized manner. Perhaps, a total of 10 retrieved oocytes from the patient may be considered a suitable threshold, since the chances of conception are unlikely to be impaired. With regards to the amount of subsidy being given to the patient, it is suggested that the proportion of financial subsidy should be equal to the proportion of the patient's oocytes being allocated to research. No doubt, the promise of future therapeutic benefit may be offered to the patient instead of financial subsidy. However, this is ethically controversial because therapeutic cloning has not yet been demonstrated to be a viable model of clinical therapy and any promises made to

  12. EFSA Panel on Biological Hazards (BIOHAZ); Scientific Opinion on Reflecting on the experiences and lessons learnt from modelling on biological hazards

    DEFF Research Database (Denmark)

    Hald, Tine

    methodological uncertainties, and therefore, preferences for types of models cannot be specified. Newer approaches need to be identified and considered. Fit for purpose and simplicity are key issues when developing QMRA models. However, limits on time and resources may restrict the model selection. At the start......” should be used carefully, with scientific criteria and context clearly defined, or avoided....

  13. Models for recurrent gas release event behavior in hazardous waste tanks

    International Nuclear Information System (INIS)

    Anderson, D.N.; Arnold, B.C.

    1994-08-01

    Certain radioactive waste storage tanks at the United States Department of Energy Hanford facilities continuously generate gases as a result of radiolysis and chemical reactions. The congealed sludge in these tanks traps the gases and causes the level of the waste within the tanks to rise. The waste level continues to rise until the sludge becomes buoyant and ''rolls over'', changing places with heavier fluid on top. During a rollover, the trapped gases are released, resulting in a sudden drop in the waste level. This is known as a gas release event (GRE). After a GRE, the waste re-congeals and gas again accumulates, leading to another GRE. We present nonlinear time series models that produce simulated sample paths that closely resemble the temporal history of waste levels in these tanks. The models also imitate the random GRE behavior observed in the temporal waste level history of a storage tank. We are interested in using the structure of these models to understand the probabilistic behavior of the random variable ''time between consecutive GREs''. Understanding the stochastic nature of this random variable is important because the hydrogen and nitrous oxide gases released during a GRE are flammable and the ammonia that is released is a health risk. From a safety perspective, activity around such waste tanks should be halted when a GRE is imminent. With credible GRE models, we can establish time windows in which waste tank research and maintenance activities can be safely performed
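
    As a toy illustration of the kind of sample-path behavior described (entirely hypothetical parameters, not the authors' fitted model), one can let the GRE probability rise steeply as the waste level approaches a critical value:

```python
import numpy as np

def simulate_waste_level(days=2000, rise=0.05, rollover_level=2.0,
                         steepness=8.0, noise=0.01, seed=1):
    """Waste level rises as gas accumulates; the chance of a gas release
    event (GRE) grows sharply near a critical level, and a GRE drops the
    level suddenly as trapped gas escapes."""
    rng = np.random.default_rng(seed)
    level, levels, gre_times = 0.0, [], []
    for t in range(days):
        level += rise * rng.uniform(0.5, 1.5) + noise * rng.standard_normal()
        p_gre = 1.0 / (1.0 + np.exp(-steepness * (level - rollover_level)))
        if rng.uniform() < p_gre:            # rollover: trapped gas released
            level *= rng.uniform(0.3, 0.6)   # sudden drop in waste level
            gre_times.append(t)
        levels.append(level)
    return np.array(levels), np.diff(gre_times)  # path and inter-GRE times

levels, gaps = simulate_waste_level()
print(f"{len(gaps) + 1} GREs; mean time between GREs ~ {gaps.mean():.1f} days")
```

    The empirical distribution of the inter-GRE times from many such paths is exactly the object the abstract is after: the probabilistic behavior of ''time between consecutive GREs''.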

  14. Life-Stage Physiologically-Based Pharmacokinetic (PBPK) Model Applications to Screen Environmental Hazards.

    Science.gov (United States)

    This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...

  15. Hazard rate model and statistical analysis of a compound point process

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2005-01-01

    Roč. 41, č. 6 (2005), s. 773-786 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * Cox regression model * intensity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005

  16. Development of a high-fidelity numerical model for hazard prediction in the urban environment

    International Nuclear Information System (INIS)

    Lien, F.S.; Yee, E.; Ji, H.; Keats, A.; Hsieh, K.J.

    2005-01-01

    The release of chemical, biological, radiological, or nuclear (CBRN) agents by terrorists or rogue states in a North American city (densely populated urban centre) and the subsequent exposure, deposition, and contamination are emerging threats in an uncertain world. The transport, dispersion, deposition, and fate of a CBRN agent released in an urban environment is an extremely complex problem that encompasses potentially multiple space and time scales. The availability of high-fidelity, time-dependent models for the prediction of a CBRN agent's movement and fate in a complex urban environment can provide the strongest technical and scientific foundation for support of Canada's more broadly based effort at advancing counter-terrorism planning and operational capabilities. The objective of this paper is to report the progress of developing and validating an integrated, state-of-the-art, high-fidelity multi-scale, multi-physics modeling system for the accurate and efficient prediction of urban flow and dispersion of CBRN materials. Development of this proposed multi-scale modeling system will provide the real-time modeling and simulation tool required to predict injuries, casualties, and contamination and to make relevant decisions (based on the strongest technical and scientific foundations) in order to minimize the consequences of a CBRN incident based on a pre-determined decision making framework. (author)

  17. A dynamic approach for the impact of a toxic gas dispersion hazard considering human behaviour and dispersion modelling.

    Science.gov (United States)

    Lovreglio, Ruggiero; Ronchi, Enrico; Maragkos, Georgios; Beji, Tarek; Merci, Bart

    2016-11-15

    The release of toxic gases due to natural/industrial accidents or terrorist attacks in populated areas can have tragic consequences. To prevent and evaluate the effects of these disasters different approaches and modelling tools have been introduced in the literature. These instruments are valuable tools for risk managers doing risk assessment of threatened areas. Despite the significant improvements in hazard assessment in case of toxic gas dispersion, these analyses do not generally include the impact of human behaviour and people movement during emergencies. This work aims at providing an approach which considers both modelling of gas dispersion and evacuation movement in order to improve the accuracy of risk assessment for disasters involving toxic gases. The approach is applied to a hypothetical scenario including a ship releasing Nitrogen dioxide (NO2) on a crowd attending a music festival. The difference between the results obtained with existing static methods (people do not move) and a dynamic approach (people move away from the danger) which considers people movement with different degrees of sophistication (either a simple linear path or more complex behavioural modelling) is discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Analysis of risk indicators and issues associated with applications of screening model for hazardous and radioactive waste sites

    International Nuclear Information System (INIS)

    Buck, J.W.; Strenge, D.L.; Droppo, J.G. Jr.

    1990-12-01

    Risk indicators, such as population risk, maximum individual risk, time of arrival of contamination, and maximum water concentrations, were analyzed to determine their effect on results from a screening model for hazardous and radioactive waste sites. The analysis of risk indicators is based on calculations resulting from exposure to air and waterborne contamination predicted with the Multimedia Environmental Pollutant Assessment System (MEPAS) model. The different risk indicators were analyzed based on constituent type and transport and exposure pathways. Three of the specific comparisons that were made are (1) population-based versus maximum individual-based risk indicators, (2) time of arrival of contamination, and (3) comparison of different threshold assumptions for noncarcinogenic impacts. Comparison of indicators for population- and maximum individual-based human health risk suggests that these two parameters are highly correlated, but for a given problem, one may be more important than the other. The results indicate that the arrival distribution for different levels of contamination reaching a receptor can also be helpful in decisions regarding the use of resources for remediating short- and long-term environmental problems. The addition of information from a linear model for noncarcinogenic impacts allows interpretation of results below the reference dose (RfD) levels that might help in decisions for certain applications. The analysis of risk indicators suggests that important information may be lost by the use of a single indicator to represent public health risk and that multiple indicators should be considered. 15 refs., 8 figs., 1 tab

  19. Comparison of 2D numerical models for river flood hazard assessment: simulation of the Secchia River flood in January, 2014

    Science.gov (United States)

    Shustikova, Iuliia; Domeneghetti, Alessio; Neal, Jeffrey; Bates, Paul; Castellarin, Attilio

    2017-04-01

    Hydrodynamic modeling of inundation events still carries a large array of uncertainties, an effect especially evident in models run over geographically large areas. Recent studies suggest using fully two-dimensional (2D) models with high resolution in order to avoid the uncertainties and limitations that come from an incorrect interpretation of flood dynamics and an unrealistic reproduction of the terrain topography. This, however, affects computational efficiency, increasing running time and hardware demands. With this in mind, our study evaluates and compares numerical models of different complexity by testing them on a flood event that occurred in the basin of the Secchia River, Northern Italy, on 19 January 2014. The event was characterized by a levee breach and consequent flooding of over 75 km2 of the plain behind the dike within 48 hours, causing population displacement, one death and economic losses in excess of 400 million euros. We test the well-established TELEMAC 2D and LISFLOOD-FP codes, together with the recently released HEC-RAS 5.0.3 (2D model); all models are implemented using different grid sizes (2-200 m) derived from a 1 m resolution digital elevation model. TELEMAC is a fully 2D hydrodynamic model based on a finite-element or finite-volume approach, whereas HEC-RAS 5.0.3 and LISFLOOD-FP are both coupled 1D-2D models. All models are calibrated against observed inundation extent and maximum water depths, which are retrieved from remotely sensed data and field survey reports. Our study quantitatively compares the three modeling strategies, highlighting differences in terms of ease of implementation, accuracy of representation of hydraulic processes within floodplains, and computational efficiency. Additionally, we look into the different grid resolutions in terms of result accuracy and computation time. Our study is a preliminary assessment that focuses on smaller areas in order to identify potential modeling schemes
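
    Calibration against an observed inundation extent is typically scored with an overlap index; the sketch below shows one common choice (a generic fit measure, not necessarily the one used in this study), computed on binary wet/dry rasters.

    ```python
    import numpy as np

    # Illustrative skill score for comparing a modelled flood extent with an
    # observed one on the same raster grid: F = |model AND obs| / |model OR obs|,
    # where 1 means perfect overlap.

    def flood_fit_index(modelled: np.ndarray, observed: np.ndarray) -> float:
        """Both inputs are boolean wet/dry masks of identical shape."""
        wet_both = np.logical_and(modelled, observed).sum()
        wet_any = np.logical_or(modelled, observed).sum()
        return wet_both / wet_any if wet_any else float("nan")

    # e.g. masks rasterised from TELEMAC / LISFLOOD-FP / HEC-RAS output vs. a
    # remotely sensed inundation extent (synthetic stand-ins here):
    rng = np.random.default_rng(0)
    obs = rng.random((100, 100)) > 0.5
    mod = obs ^ (rng.random((100, 100)) > 0.9)   # obs with ~10% disagreement
    print(f"fit index: {flood_fit_index(mod, obs):.2f}")
    ```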

  20. Experimental and Numerical Modelling of CO2 Atmospheric Dispersion in Hazardous Gas Emission Sites.

    Science.gov (United States)

    Gasparini, A.; sainz Gracia, A. S.; Grandia, F.; Bruno, J.

    2015-12-01

    Under stable atmospheric conditions and/or in the presence of topographic depressions, CO2 concentrations can reach high values, with lethal effects on living organisms. The distribution of denser-than-air gases released from the underground is governed by gravity, turbulence and dispersion. Once emitted, the gas distribution is initially driven by buoyancy and a gas cloud accumulates on the ground (gravitational phase); with time the density gradient becomes less important due to dispersion or mixing, and the gas distribution is mainly governed by wind and atmospheric turbulence (passive dispersion phase). Natural analogues provide evidence of the impact of CO2 leakage. Dangerous atmospheric CO2 concentrations related to underground emissions have occasionally been reported, although the conditions favouring the persistence of such concentrations are barely studied. In this work, the dynamics of CO2 in the atmosphere after ground emission are assessed to quantify the potential risk. Two approaches have been followed: (1) direct measurement of air concentration in a natural emission site, where formation of a "CO2 lake" is common, and (2) numerical atmospheric modelling. Two sites with different morphology were studied: (a) the Cañada Real site, a flat terrain in the Volcanic Field of Campo de Calatrava (Spain); (b) the Solforata di Pomezia site, a rough terrain in the Alban Hills Volcanic Region (Italy). The comparison between field data and model calculations reveals that numerical dispersion models are capable of predicting the formation of CO2 accumulations over the ground as a consequence of underground gas emission. Therefore, atmospheric modelling could be included as a valuable methodology in the risk assessment of leakage in natural degassing systems and in CCS projects. Conclusions from this work provide clues on whether leakage may be a real risk for humans and under which conditions this risk needs to be included in the risk assessment.
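
    The transition from the gravitational to the passive phase is conventionally judged with a bulk Richardson number; the sketch below shows that standard criterion with invented values (textbook dense-gas reasoning, not a calculation from the paper).

    ```python
    import numpy as np

    # Standard dense-gas criterion: a bulk Richardson number comparing the
    # cloud's negative buoyancy with ambient turbulence. Ri >> 1 suggests the
    # gravitational (slumping) phase; Ri << 1 suggests passive dispersion.

    def bulk_richardson(rho_cloud, rho_air, depth, u_star, g=9.81):
        """rho in kg/m^3, cloud depth in m, u_star = friction velocity in m/s."""
        g_prime = g * (rho_cloud - rho_air) / rho_air   # reduced gravity
        return g_prime * depth / u_star**2

    # CO2-rich air pooled 2 m deep in a depression on a calm night (assumed values):
    print(f"Ri = {bulk_richardson(rho_cloud=1.5, rho_air=1.2, depth=2.0, u_star=0.1):.0f}")
    ```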

  1. Modelling and assessment of urban flood hazards based on rainfall intensity-duration-frequency curves reformation

    OpenAIRE

    Ghazavi, Reza; Moafi Rabori, Ali; Ahadnejad Reveshty, Mohsen

    2016-01-01

    Estimating the design storm from rainfall intensity–duration–frequency (IDF) curves is an important step in the hydrologic planning of urban areas. The main aim of this study was to estimate rainfall intensities of the Zanjan city watershed based on the overall relationship of rainfall IDF curves and an appropriate model of hourly rainfall estimation (the Sherman method and the Ghahreman and Abkhezr method). The hydrologic and hydraulic impacts of changes in rainfall IDF curves on flood properties were evaluated via Stormw...
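
    The Sherman method referenced here fits an IDF relation of the form i = a/(t + b)^c for a given return period; a minimal sketch with synthetic data points follows (the coefficients and data are illustrative assumptions, not values from the study).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit the classical Sherman-type IDF relation to (duration, intensity)
    # pairs for one return period. The data points below are invented.

    def sherman(t, a, b, c):
        """Rainfall intensity (mm/h) for duration t (min)."""
        return a / (t + b) ** c

    durations = np.array([10, 20, 30, 60, 120, 360])    # min
    intensity = np.array([95, 70, 58, 38, 24, 11])      # mm/h (synthetic)

    params, _ = curve_fit(sherman, durations, intensity, p0=(500.0, 10.0, 0.8))
    a, b, c = params
    print(f"i(t) = {a:.1f} / (t + {b:.1f})^{c:.2f}")
    ```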

  2. Do Female Researchers Face a Glass Ceiling in France? A Hazard Model of Promotions

    OpenAIRE

    Sabatier, Mareva

    2010-01-01

    The present article examines whether French female researchers face a glass ceiling, an invisible barrier to promotion. Using an original database from the National Institute for Agricultural Research, we estimate duration models for promotions. The methodology used allowed us to take into account censored observations and unobserved heterogeneity. Our results show a significant gender effect that does not contradict the glass-ceiling hypothesis. In addition, factors that ...

  3. The Resource Hazards Model for the Critical Infrastructure of the State Emergency Management Process

    Directory of Open Access Journals (Sweden)

    Ostrowska Teresa

    2014-08-01

    Full Text Available This paper presents an investigation of the relevant factors related to the construction of a resource model designed to be useful in the management processes of critical infrastructure (CI) operation for state emergencies. The genesis of the research lay in the perceived need for effective protection of multidimensional CI methodologies, and it was influenced by the nature of the physical characteristics of the available resources. It was necessary to establish a clear structure and well-defined objectives, and to assess the functional and structural resources required, as well as the potential relational susceptibilities deriving from a number of possible threats, the possible seriousness of a specific range of incidents, and their possible consequences. The interdependence of CI stocks is shown by the use of tables of resource classes. The dynamics of the interaction of CI resources are modeled by examining how clusters of potential risks can, at any given time, create a class of compounds relating susceptibilities and threats to the resources. As a result, the model can be used to conduct multi-dimensional risk calculations for crisis-management CI resource configurations.

  4. Proportioning of U3O8 powder

    International Nuclear Information System (INIS)

    Cermak, V.; Markvart, M.; Novy, P.; Vanka, M.

    1989-01-01

    The tests are briefly described of proportioning U3O8 powder of a granulometric grain size range of 0-160 μm using a vertical screw, a horizontal dual screw and a vibration dispenser, with a view to proportioning very fine U3O8 powder fractions produced in the oxidation of UO2 fuel pellets. In the tests, the evenness of proportioning was assessed by the percentage value of the proportioning rate spread measured at one-minute intervals at a proportioning rate of 1-3 kg/h. In feeding the U3O8 into a flame fluorator, it is advantageous to monitor the continuity of the powder column being proportioned and to assess it radiometrically by the value of the proportioning rate spread at very short intervals (0.1 s). (author). 10 figs., 1 tab., 12 refs

  5. Modeling the GLOF Hazard Process Chain at Imja Lake in the Nepal Himalaya

    Science.gov (United States)

    Lala, J.; McKinney, D. C.; Rounce, D.

    2017-12-01

    The Hindu Kush-Himalaya region contains more glacial ice than any other non-polar region on Earth. Many glacial lakes in Nepal are held in place by natural moraine dams, which are inherently unstable. Avalanches or landslides entering glacial lakes can cause tsunami-like waves that can overtop the moraines and trigger glacial lake outburst floods (GLOF). Mass loss at the Imja glacier is the highest in the Mount Everest region and contributes to the expansion of Imja Tsho, a lake with several villages downstream. A GLOF from the lake might destroy both property and human life, making an understanding of flood triggering processes beneficial for both the downstream villages and other GLOF-prone areas globally. The process chain for an avalanche-induced GLOF was modeled numerically. The volume and velocity of debris from avalanches entering various future lake extents were calculated using RAMMS. Resulting waves and downstream flooding were simulated using BASEMENT to evaluate erosion at the terminal moraine. Wave characteristics in BASEMENT were validated with empirical equations to ensure the proper transfer of momentum from the avalanche to the lake. Moraine erosion was determined for two geomorphologic scenarios: a site-specific scenario using field samples, and a worst-case scenario based on past literature. Both cases resulted in no flooding outside the river channel at downstream villages. Worst-case scenario geomorphology resulted in increased channelization of the lake outlet and some moraine erosion but no catastrophic collapse. Site-specific data yielded similar results but with even less erosion and downstream discharge. While the models confirm that Imja Tsho is unlikely to produce a catastrophic GLOF in the near future, they also highlight the importance of continued monitoring of the lake. Furthermore, the ease and flexibility of these methods allows for their adoption by a wide range of stakeholders for modeling other high-risk lakes.

  6. Applying the Land Use Portfolio Model to Estimate Natural-Hazard Loss and Risk - A Hypothetical Demonstration for Ventura County, California

    Science.gov (United States)

    Dinitz, Laura B.

    2008-01-01

    With costs of natural disasters skyrocketing and populations increasingly settling in areas vulnerable to natural hazards, society is challenged to better allocate its limited risk-reduction resources. In 2000, Congress passed the Disaster Mitigation Act, amending the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Robert T. Stafford Disaster Relief and Emergency Assistance Act, Pub. L. 93-288, 1988; Federal Emergency Management Agency, 2002, 2008b; Disaster Mitigation Act, 2000), mandating that State, local, and tribal communities prepare natural-hazard mitigation plans to qualify for pre-disaster mitigation grants and post-disaster aid. The Federal Emergency Management Agency (FEMA) was assigned to coordinate and implement hazard-mitigation programs, and it published information about specific mitigation-plan requirements and the mechanisms (through the Hazard Mitigation Grant Program-HMGP) for distributing funds (Federal Emergency Management Agency, 2002). FEMA requires that each community develop a mitigation strategy outlining long-term goals to reduce natural-hazard vulnerability, mitigation objectives and specific actions to reduce the impacts of natural hazards, and an implementation plan for those actions. The implementation plan should explain methods for prioritizing, implementing, and administering the actions, along with a 'cost-benefit review' justifying the prioritization. FEMA, along with the National Institute of Building Sciences (NIBS), supported the development of HAZUS ('Hazards U.S.'), a geospatial natural-hazards loss-estimation tool, to help communities quantify potential losses and to aid in the selection and prioritization of mitigation actions. HAZUS was expanded to a multiple-hazard version, HAZUS-MH, that combines population, building, and natural-hazard science and economic data and models to estimate physical damages, replacement costs, and business interruption for specific natural-hazard scenarios. HAZUS

  7. Transposing an active fault database into a seismic hazard fault model for nuclear facilities. Pt. 1. Building a database of potentially active faults (BDFA) for metropolitan France

    Energy Technology Data Exchange (ETDEWEB)

    Jomard, Herve; Cushing, Edward Marc; Baize, Stephane; Chartier, Thomas [IRSN - Institute of Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); Palumbo, Luigi; David, Claire [Neodyme, Joue les Tours (France)

    2017-07-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. To date, BDFA reports 136 faults and represents a first step toward the implementation of seismic source models that would be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15% of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  8. Predictive modeling of hazardous waste landfill total above-ground biomass using passive optical and LIDAR remotely sensed data

    Science.gov (United States)

    Hadley, Brian Christopher

    This dissertation assessed remotely sensed data and geospatial modeling techniques to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass and an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR-derived topographic variables (i.e. elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400-960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
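
    To make the model comparison concrete, here is a minimal sketch of fitting OLS and tree-structured regression to the same predictors; the synthetic variables merely stand in for the spectral indices and topographic covariates named above.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    # Illustrative comparison on synthetic data: predict biomass from a few
    # explanatory variables with OLS and with tree-structured regression.
    # Variable meanings are assumptions, not the dissertation's data.

    rng = np.random.default_rng(42)
    n = 300
    X = rng.random((n, 3))                    # e.g. [NDVI, 1DZ_DGVI, slope]
    biomass = 2.0 * X[:, 0] + np.where(X[:, 1] > 0.5, 1.5, 0.0) + rng.normal(0, 0.2, n)

    ols = LinearRegression().fit(X, biomass)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, biomass)

    print(f"OLS  R^2: {ols.score(X, biomass):.3f}")
    print(f"Tree R^2: {tree.score(X, biomass):.3f}")
    ```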

  9. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    Science.gov (United States)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress-trigger or stress-shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and the stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how and if the induced stress perturbations modify the ratio between small and large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to
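
    For reference, the Coulomb stress change invoked here is conventionally defined as follows (the standard formulation from the stress-transfer literature, not a result specific to this abstract):

    ```latex
    % Standard Coulomb failure stress change on a receiver fault: failure is
    % promoted when \Delta CFF > 0 and inhibited (a stress shadow) when
    % \Delta CFF < 0.
    \[
      \Delta \mathrm{CFF} \;=\; \Delta\tau \;+\; \mu' \,\Delta\sigma_n ,
    \]
    % where $\Delta\tau$ is the shear stress change resolved in the slip
    % direction, $\Delta\sigma_n$ the normal stress change (positive when
    % unclamping), and $\mu'$ the effective friction coefficient.
    ```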

  10. Coastal Digital Elevation Models (DEMs) for tsunami hazard assessment on the French coasts

    Science.gov (United States)

    Maspataud, Aurélie; Biscara, Laurie; Hébert, Hélène; Schmitt, Thierry; Créach, Ronan

    2015-04-01

    Building precise and up-to-date coastal DEMs is a prerequisite for accurate modeling and forecasting of hydrodynamic processes at local scale. Marine flooding, originating from tsunamis, storm surges or waves, is one of them. High resolution DEMs are being generated for multiple coast configurations (gulf, embayment, strait, estuary, harbor approaches, low-lying areas…) along the French Atlantic and Channel coasts. This work is undertaken within the framework of the TANDEM project (Tsunamis in the Atlantic and the English ChaNnel: Definition of the Effects through numerical Modeling) (2014-2017). DEM boundaries were defined considering the vicinity of French civil nuclear facilities, site-effect considerations and potential tsunamigenic sources, the latter identified from available historical observations. Seamless integrated topographic and bathymetric coastal DEMs will be used by institutions taking part in the study to simulate expected wave heights at regional and local scale on the French coasts, for a set of defined scenarios. The main tasks were (1) the development of a new DEM production capacity, (2) aiming at the release of high-resolution, high-precision digital field models referred to vertical reference frames, which require (3) horizontal and vertical datum conversions (all source elevation data need to be transformed to a common datum), on the basis of (4) the building of (national and/or local) conversion grids of datum relationships based on known measurements. Challenges in coastal DEM development concern good practices throughout model development that can help minimize uncertainties. This is particularly true as scattered elevation data come with variable density, from multiple sources (national hydrographic services, state and local government agencies, research organizations and private engineering companies) and from many different types (paper fieldsheets to be digitized, single beam echo sounder, multibeam sonar, airborne laser

  11. Comparison of exact, efron and breslow parameter approach method on hazard ratio and stratified cox regression model

    Science.gov (United States)

    Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian

    2018-04-01

    The lungs are the most important organ of the respiratory system. Problems related to disorders of the lungs are various, i.e. pneumonia, emphysema, tuberculosis and lung cancer. Among these, lung cancer is the most harmful. With this in mind, this research applies survival analysis to the factors affecting the endurance of lung cancer patients, comparing the exact, Efron and Breslow parameter approach methods on the hazard ratio in a stratified Cox regression model. The data are based on the medical records of lung cancer patients in the Jember Paru-paru hospital in 2016, East Java, Indonesia. The factors affecting the endurance of the lung cancer patients can be classified into several criteria, i.e. sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition and body weight. The results show that the exact method in the stratified Cox regression model performs better than the others. In addition, the endurance of the patients is affected by their age and general condition.
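
    As a hedged sketch of the tie-handling comparison: statsmodels implements the Breslow and Efron approximations (though not the exact method the paper favours); the survival data below are synthetic, rounded to create heavy ties.

    ```python
    import numpy as np
    from statsmodels.duration.hazard_regression import PHReg

    # Compare Breslow vs. Efron tie handling in a Cox model on synthetic,
    # heavily tied survival times (illustrative only).

    rng = np.random.default_rng(1)
    n = 200
    age = rng.integers(40, 80, n)
    x = (age - age.mean()) / age.std()
    X = x[:, None]                                               # single covariate
    time = np.ceil(rng.exponential(scale=np.exp(-0.5 * x)) * 5)  # rounding -> ties
    status = (rng.random(n) < 0.8).astype(int)                   # 1 = event observed

    for ties in ("breslow", "efron"):
        res = PHReg(time, X, status=status, ties=ties).fit()
        print(f"{ties:8s}: beta = {res.params[0]:+.3f}")
    ```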

  12. Seismic Hazard of the Uttarakhand Himalaya, India, from Deterministic Modeling of Possible Rupture Planes in the Area

    Directory of Open Access Journals (Sweden)

    Anand Joshi

    2013-01-01

    Full Text Available This paper presents the use of a semiempirical method for seismic hazard zonation. The seismotectonically important region of the Uttarakhand Himalaya has been considered in this work. Ruptures along the lineaments in the area identified from the tectonic map are modeled deterministically using the semiempirical approach of Midorikawa (1993). This approach makes use of an attenuation relation of peak ground acceleration for simulating strong ground motion at any site. Strong motion data collected over a span of three years in this region have been used to develop an attenuation relation of peak ground acceleration of limited magnitude and distance applicability. The developed attenuation relation is used in the semiempirical method to predict peak ground acceleration from the modeled rupture planes in the area. A set of peak ground acceleration values from possible ruptures in the area at the point of investigation is further used to compute the probability of exceedance of peak ground accelerations of 100 and 200 gal. The prepared map shows that regions like Tehri, Chamoli, Almora, Srinagar, Devprayag, Bageshwar, and Pauri fall in a zone of 10% probability of exceedance of a peak ground acceleration of 200 gal.
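
    The final step — turning a set of simulated PGA values into an exceedance probability — reduces to an empirical frequency; a minimal sketch with invented values:

    ```python
    import numpy as np

    # Assumed workflow: given simulated PGA values at a site from a set of
    # modeled rupture planes, estimate the probability that a threshold
    # (e.g. 100 or 200 gal) is exceeded.

    def exceedance_probability(pga_values, threshold):
        pga = np.asarray(pga_values, dtype=float)
        return float((pga > threshold).mean())

    pga_from_ruptures = [45, 80, 120, 150, 210, 60, 95, 130, 250, 75]  # gal, synthetic
    for thr in (100, 200):
        print(f"P(PGA > {thr} gal) = {exceedance_probability(pga_from_ruptures, thr):.2f}")
    ```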

  13. [Hazard function and life table: an introduction to the failure time analysis].

    Science.gov (United States)

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables, as well as a special case of event history analysis and multistate demography. The idea of the hazard function and failure time analysis, however, has not been properly introduced to, nor commonly discussed by, demographers in Japan. The concept of the hazard function is briefly described in comparison with life tables, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of the exponential distribution, the normal distribution, and proportional hazards models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
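
    The textbook definitions behind this introduction can be stated compactly (standard relations, added here for the reader's convenience):

    ```latex
    % The hazard (force of mortality) is the instantaneous failure rate among
    % survivors; the survival function follows from its integral.
    \[
      h(t) \;=\; \lim_{\Delta t \to 0}
          \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
      \;=\; \frac{f(t)}{S(t)},
      \qquad
      S(t) \;=\; \exp\!\Bigl(-\int_0^t h(u)\,du\Bigr).
    \]
    % For the exponential case h(t) = \lambda is constant; a proportional
    % hazards model scales a baseline: h(t \mid x) = h_0(t)\exp(\beta' x).
    ```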

  14. Risk patterns in drug safety study using relative times by accelerated failure time models when proportional hazards assumption is questionable : an illustrative case study of cancer risk of patients on glucose-lowering therapies

    NARCIS (Netherlands)

    Ng, Edmond S-W; Klungel, Olaf H; Groenwold, Rolf H H; van Staa, Tjeerd-Pieter

    2015-01-01

    Observational drug safety studies may be susceptible to confounding or protopathic bias. This bias may cause a spurious relationship between drug exposure and an adverse side effect when none exists, and may lead to unwarranted safety alerts. The spurious relationship may manifest itself through
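
    When the proportional hazards assumption is questionable, an accelerated failure time (AFT) model expresses a covariate effect as a relative time (time ratio) rather than a hazard ratio. A minimal sketch with the lifelines package on synthetic data (the column names and the simulated time ratio are assumptions, not results from this study):

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import WeibullAFTFitter

    # Weibull AFT model: exp(beta) is the factor by which exposure stretches
    # or shrinks time-to-event. Data below are synthetic.

    rng = np.random.default_rng(7)
    n = 500
    exposed = rng.integers(0, 2, n)
    # exposed subjects experience events ~30% later (assumed time ratio 1.3):
    T = rng.weibull(1.5, n) * 10 * np.where(exposed == 1, 1.3, 1.0)
    E = (rng.random(n) < 0.9).astype(int)   # ~10% right-censored

    df = pd.DataFrame({"T": T, "E": E, "exposed": exposed})
    aft = WeibullAFTFitter().fit(df, duration_col="T", event_col="E")
    beta = aft.summary.loc[("lambda_", "exposed"), "coef"]
    print(f"estimated time ratio for exposure: {np.exp(beta):.2f}")
    ```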

  15. The Prospect of using Three-Dimensional Earth Models To Improve Nuclear Explosion Monitoring and Ground Motion Hazard Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zucca, J J; Walter, W R; Rodgers, A J; Richards, P; Pasyanos, M E; Myers, S C; Lay, T; Harris, D; Antoun, T

    2008-11-19

    The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags

  16. Application of probabilistic seismic hazard models with special calculation for the waste storage sites in Egypt

    International Nuclear Information System (INIS)

    Othman, A.A.; El-Hemamy, S.T.

    2000-01-01

    Probabilistic strong motion maps of Egypt are derived by applying Gumbel models and the likelihood method to 8 earthquake source zones in Egypt and adjacent regions. Peak horizontal acceleration is mapped. Seismic data are collected from the Helwan Catalog (1900-1997), the regional catalog of earthquakes from the International Seismological Center (ISC, 1910-1993) and earthquake data reports of the US Department of International Geological Survey (USCGS, 1900-1994). Isoseismal maps are also available for some events which occurred in Egypt. Some earthquake source zones are well defined on the basis of both tectonics and average seismicity rates, but a lack of understanding of the near-field effects of the large earthquakes prohibits accurate estimates of ground motion in their vicinity. Some source zones have no large-scale crustal features or zones of weakness that can explain the seismicity and must, therefore, be defined simply as concentrations of seismic activity with no geological or geophysical controls on the boundaries. Other source zones lack information on low-magnitude seismicity that would be representative of longer periods of time. The new probabilistic ground motion estimates in Egypt have been compared with the equivalent estimates made in 1990, and are used to produce a new peak ground acceleration map to replace the 1990 peak acceleration zoning maps in the Building Code of Egypt. (author)
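
    A Gumbel-based step of the kind described reduces to fitting the extreme-value distribution to annual maxima and inverting it at the design probability; a hedged sketch with synthetic data:

    ```python
    import numpy as np
    from scipy import stats

    # Fit annual maximum PGA values (synthetic here) with a Gumbel model and
    # read off the level with 10% exceedance probability in 50 years, a common
    # design quantity. Parameters are illustrative assumptions.

    rng = np.random.default_rng(3)
    annual_max_pga = stats.gumbel_r.rvs(loc=30.0, scale=15.0, size=80,
                                        random_state=rng)   # gal

    loc, scale = stats.gumbel_r.fit(annual_max_pga)
    # 10% in 50 years -> annual non-exceedance probability p = 0.9 ** (1/50)
    p_annual = 0.9 ** (1.0 / 50.0)
    design_pga = stats.gumbel_r.ppf(p_annual, loc=loc, scale=scale)
    print(f"PGA with 10% exceedance probability in 50 yr: {design_pga:.0f} gal")
    ```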

  17. Modeling human exposure to hazardous-waste sites: a question of completeness

    International Nuclear Information System (INIS)

    Daniels, J.I.; McKone, T.E.

    1991-01-01

    In risk analysis, we use human-exposure assessments to translate contaminant sources into quantitative estimates of the amount of contaminant that comes in contact with human-environment boundaries, that is, the lungs, the gastrointestinal tract, and the skin surface of individuals within a specified population. An assessment of intake requires that we determine how much crosses these boundaries. Exposure assessments often rely implicitly on the assumption that exposure can be linked by simple parameters to ambient concentrations in air, water, and soil. However, more realistic exposure models require that we abandon such simple assumptions. To link contaminant concentrations in water, air, or soil with potential human intakes, we construct pathway-exposure factors (PEFs). For each PEF we combine information on environmental partitioning as well as human anatomy, physiology, and activity patterns into an algebraic term that converts concentrations of contaminants (in mg/L water, mg/m3 air, and mg/kg soil) into a daily intake per unit body weight in mg/kg-d for a specific route of exposure such as inhalation, ingestion, or dermal uptake. Using examples involving human exposure to either a radionuclide (tritium, 3H) or a toxic organic chemical (tetrachloroethylene, PCE) in soil, water, and air, we illustrate the use of PEFs and consider the implications for risk assessment. (au)
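
    In its simplest form a PEF is just the multiplicative bridge from ambient concentration to intake; the sketch below shows the ingestion route with assumed adult defaults (2 L/d, 70 kg), not values from the paper.

    ```python
    # Minimal sketch of the pathway-exposure-factor idea (illustrative numbers):
    # a PEF converts an ambient concentration into a daily intake per unit
    # body weight for one exposure route.

    def intake_mg_per_kg_day(concentration, pef):
        """concentration: e.g. mg/L in water; pef: (L/d ingested) / (kg body weight)."""
        return concentration * pef

    # Drinking-water ingestion for an adult (assumed: 2 L/d, 70 kg body weight):
    pef_water_ingestion = 2.0 / 70.0   # L/(kg*d)
    c_pce_water = 0.005                # mg/L tetrachloroethylene (assumed)
    print(f"intake: {intake_mg_per_kg_day(c_pce_water, pef_water_ingestion):.2e} mg/kg-d")
    ```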

  18. Radioactive hazards

    International Nuclear Information System (INIS)

    Gill, J.R.

    1980-01-01

    The use of radioactive substances in hospital laboratories is discussed and the attendant hazards and necessary precautions examined. The new legislation under the Health and Safety at Work Act which, it is proposed, will replace existing legal requirements in the field of health and safety at work by a system of regulations and approved codes of practice designed to maintain or improve the standards of health, safety and welfare already established, is considered with particular reference to protection against ionising radiations. (UK)

  19. Potential hazards to embryo implantation: A human endometrial in vitro model to identify unwanted antigestagenic actions of chemicals

    International Nuclear Information System (INIS)

    Fischer, L.; Deppert, W.R.; Pfeifer, D.; Stanzel, S.; Weimer, M.; Hanjalic-Beck, A.; Stein, A.; Straßer, M.; Zahradnik, H.P.; Schaefer, W.R.

    2012-01-01

    Embryo implantation is a crucial step in human reproduction and depends on the timely development of a receptive endometrium. The human endometrium is unique among adult tissues due to its dynamic alterations during each menstrual cycle. It hosts the implantation process, which is governed by progesterone, whereas 17β-estradiol regulates the preceding proliferation of the endometrium. The receptors for both steroids are targets for drugs and endocrine disrupting chemicals. Chemicals with unwanted antigestagenic actions are potentially hazardous to embryo implantation, since many pharmaceutical antiprogestins adversely affect endometrial receptivity. This risk can be addressed by human tissue-specific in vitro assays. As a working basis we compiled data on chemicals interacting with the PR. In our experimental work, we developed a flexible in vitro model based on human endometrial Ishikawa cells. Effects of antiprogestin compounds on pre-selected target genes were characterized by sigmoidal concentration–response curves obtained by RT-qPCR. The estrogen sulfotransferase (SULT1E1) was identified as the most responsive target gene by microarray analysis. The agonistic effect of progesterone on SULT1E1 mRNA was concentration-dependently antagonized by RU486 (mifepristone) and ZK137316 and, with lower potency, by 4-nonylphenol, bisphenol A and apigenin. The negative control methyl acetoacetate showed no effect. The effects of progesterone and RU486 were confirmed on the protein level by Western blotting. We demonstrated proof of principle that our Ishikawa model is suitable for studying quantitatively the effects of antiprogestin-like chemicals on endometrial target genes in comparison with pharmaceutical reference compounds. This test is useful for hazard identification and may contribute to reducing animal studies. -- Highlights: ► We compare progesterone receptor-mediated endometrial effects of chemicals and drugs. ► 4-Nonylphenol, bisphenol A and apigenin exert weak

  20. Potential hazards to embryo implantation: A human endometrial in vitro model to identify unwanted antigestagenic actions of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, L.; Deppert, W.R. [Department of Obstetrics and Gynecology, University Hospital Freiburg (Germany); Pfeifer, D. [Department of Hematology and Oncology, University Hospital Freiburg (Germany); Stanzel, S.; Weimer, M. [Department of Biostatistics, German Cancer Research Center, Heidelberg (Germany); Hanjalic-Beck, A.; Stein, A.; Straßer, M.; Zahradnik, H.P. [Department of Obstetrics and Gynecology, University Hospital Freiburg (Germany); Schaefer, W.R., E-mail: wolfgang.schaefer@uniklinik-freiburg.de [Department of Obstetrics and Gynecology, University Hospital Freiburg (Germany)

    2012-05-01

    Embryo implantation is a crucial step in human reproduction and depends on the timely development of a receptive endometrium. The human endometrium is unique among adult tissues due to its dynamic alterations during each menstrual cycle. It hosts the implantation process, which is governed by progesterone, whereas 17β-estradiol regulates the preceding proliferation of the endometrium. The receptors for both steroids are targets for drugs and endocrine disrupting chemicals. Chemicals with unwanted antigestagenic actions are potentially hazardous to embryo implantation, since many pharmaceutical antiprogestins adversely affect endometrial receptivity. This risk can be addressed by human tissue-specific in vitro assays. As a working basis we compiled data on chemicals interacting with the PR. In our experimental work, we developed a flexible in vitro model based on human endometrial Ishikawa cells. Effects of antiprogestin compounds on pre-selected target genes were characterized by sigmoidal concentration–response curves obtained by RT-qPCR. The estrogen sulfotransferase (SULT1E1) was identified as the most responsive target gene by microarray analysis. The agonistic effect of progesterone on SULT1E1 mRNA was concentration-dependently antagonized by RU486 (mifepristone) and ZK137316 and, with lower potency, by 4-nonylphenol, bisphenol A and apigenin. The negative control methyl acetoacetate showed no effect. The effects of progesterone and RU486 were confirmed on the protein level by Western blotting. We demonstrated proof of principle that our Ishikawa model is suitable for studying quantitatively the effects of antiprogestin-like chemicals on endometrial target genes in comparison with pharmaceutical reference compounds. This test is useful for hazard identification and may contribute to reducing animal studies. -- Highlights: ► We compare progesterone receptor-mediated endometrial effects of chemicals and drugs. ► 4-Nonylphenol, bisphenol A and apigenin exert weak

  1. Precarious Rock Methodology for Seismic Hazard: Physical Testing, Numerical Modeling and Coherence Studies

    Energy Technology Data Exchange (ETDEWEB)

    Anooshehpoor, Rasool; Purvance, Matthew D.; Brune, James N.; Preston, Leiph A.; Anderson, John G.; Smith, Kenneth D.

    2006-09-29

    This report covers the following projects: shake-table tests of the precarious rock methodology, field tests of precarious rocks at Yucca Mountain and comparison of the results with PSHA predictions, a study of the coherence of the wave field in the ESF, and a limited survey of precarious rocks south of the proposed repository footprint. A series of shake-table experiments was carried out at the University of Nevada, Reno Large Scale Structures Laboratory. The bulk of the experiments involved scaling acceleration time histories (uniaxial forcing) from 0.1 g to the point where the objects on the shake table overturned a specified number of times. The results of these experiments have been compared with numerical overturning predictions. Numerical predictions for toppling of large objects with simple contact conditions (e.g., I-beams with sharp basal edges) agree well with shake-table results. The numerical model slightly underpredicts the overturning of small rectangular blocks. It overpredicts the overturning PGA for asymmetric granite boulders with complex basal contact conditions. In general the results confirm the approximate predictions of previous studies. Field testing of several rocks at Yucca Mountain has approximately confirmed the preliminary results from previous studies, suggesting that the PSHA predictions are too high, possibly because of uncertainty in the mean of the attenuation relations. Study of the coherence of wavefields in the ESF has provided results which will be very important in the design of the canister distribution, in particular a preliminary estimate of the wavelengths at which the wavefields become incoherent. No evidence was found for extreme focusing by lens-like inhomogeneities. A limited survey for precarious rocks confirmed that they extend south of the repository, and one of these has been field tested.

  2. Evaluation of model-predicted hazardous air pollutants (HAPs) near a mid-sized U.S. airport

    Science.gov (United States)

    Vennam, Lakshmi Pradeepa; Vizuete, William; Arunachalam, Saravanan

    2015-10-01

    Accurate modeling of aircraft-emitted pollutants in the vicinity of airports is essential to study the impact on local air quality and to answer policy and health-impact related questions. To quantify air quality impacts of airport-related hazardous air pollutants (HAPs), we carried out a fine-scale (4 × 4 km horizontal resolution) Community Multiscale Air Quality (CMAQ) model simulation at the T.F. Green airport in Providence (PVD), Rhode Island. We considered temporally and spatially resolved aircraft emissions from the new Aviation Environmental Design Tool (AEDT). These model predictions were then evaluated with observations from a field campaign focused on assessing HAPs near the PVD airport. The annual normalized mean error (NME) was in the range of 36-70% for all HAPs except acrolein (>70%). The addition of highly resolved aircraft emissions showed only marginally incremental improvements in performance (1-2% decrease in NME) for some HAPs (formaldehyde, xylene). Compared to a coarser 36 × 36 km grid resolution, the 4 × 4 km grid resolution did improve performance, by 5-20% in NME for formaldehyde and acetaldehyde. The change in power setting (from the traditional International Civil Aviation Organization (ICAO) 7% to the observation-based 4%) doubled the aircraft idling emissions of HAPs but led to only a 2% decrease in NME. Overall, modeled aircraft-attributable contributions are in the range of 0.5-28% near a mid-sized airport grid cell, with maximum impacts seen only within 4-16 km of the airport grid cell. Comparison of CMAQ predictions with HAP estimates from EPA's National Air Toxics Assessment (NATA) did show similar annual mean concentrations and equally poor performance. Current estimates of HAPs for PVD are a challenge for modeling systems, and refinements in our ability to simulate aircraft emissions have made only incremental improvements. Even with unrealistic increases in HAPs aviation emissions the model
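
    The evaluation metric quoted throughout this record is simple to state; a sketch of the normalized mean error, NME = Σ|M − O| / ΣO, with invented concentrations:

    ```python
    import numpy as np

    # Normalized mean error (NME) between modelled and observed concentrations.

    def normalized_mean_error(modelled, observed):
        m, o = np.asarray(modelled, float), np.asarray(observed, float)
        return np.abs(m - o).sum() / o.sum()

    obs = np.array([1.2, 0.8, 1.5, 2.0, 0.6])   # e.g. formaldehyde, ug/m^3 (synthetic)
    mod = np.array([0.9, 1.1, 1.2, 1.4, 0.9])
    print(f"NME = {100 * normalized_mean_error(mod, obs):.0f}%")
    ```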

  3. Why do card issuers charge proportional fees?

    OpenAIRE

    Oz Shy; Zhu Wang

    2008-01-01

    This paper explains why payment card companies charge consumers and merchants fees which are proportional to the transaction values instead of charging a fixed per-transaction fee. Our theory shows that, even in the absence of any cost considerations, card companies earn much higher profit when they charge proportional fees. It is also shown that competition among merchants reduces card companies' gains from using proportional fees relative to a fixed per-transaction fee. Merchants are found ...

  4. Chapter 3: Simulating fire hazard across landscapes through time: integrating state-and-transition models with the Fuel Characteristic Classification System

    Science.gov (United States)

    Jessica E. Halofsky; Stephanie K. Hart; Miles A. Hemstrom; Joshua S. Halofsky; Morris C. Johnson

    2014-01-01

    Information on the effects of management activities such as fuel reduction treatments and of processes such as vegetation growth and disturbance on fire hazard can help land managers prioritize treatments across a landscape to best meet management goals. State-and-transition models (STMs) allow landscape-scale simulations that incorporate effects of succession,...

  5. A random field model for the estimation of seismic hazard. Final report for the period 1 January 1990 - 31 December 1990

    International Nuclear Information System (INIS)

    Yucemen, S.

    1991-02-01

    The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs

  6. A random field model for the estimation of seismic hazard. Final report for the period 1 January 1990 - 31 December 1990

    Energy Technology Data Exchange (ETDEWEB)

    Yucemen, S [Middle East Technical Univ., Ankara (Turkey). Dept. of Statistics

    1991-02-01

    The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs.

  7. Toward risk assessment 2.0: Safety supervisory control and model-based hazard monitoring for risk-informed safety interventions

    International Nuclear Information System (INIS)

    Favarò, Francesca M.; Saleh, Joseph H.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a staple in the engineering risk community, and it has become to some extent synonymous with the entire quantitative risk assessment undertaking. Limitations of PRA continue to occupy researchers, and workarounds are often proposed. After a brief review of this literature, we propose to address some of PRA's limitations by developing a novel framework and analytical tools for model-based system safety, or safety supervisory control, to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. Our work shifts the emphasis from the prevailing probabilistic mindset in risk assessment toward the notions of danger indices and hazard temporal contingency. The framework and tools developed here are grounded in Control Theory and make use of the state-space formalism in modeling dynamical systems. We show that the use of state variables enables the definition of metrics for accident escalation, termed hazard levels or danger indices, which measure the “proximity” of the system state to adverse events, and we illustrate the development of such indices. Monitoring of the hazard levels provides diagnostic information to support both on-line and off-line safety interventions. For example, we show how the application of the proposed tools to a rejected-takeoff scenario provides new insight to support pilots’ go/no-go decisions. Furthermore, we augment the traditional state-space equations with a hazard equation and use the latter to estimate the times at which critical thresholds for the hazard level are (b)reached. This estimation process provides important prognostic information and produces a proxy for a time-to-accident metric, or advance notice for an impending adverse event. The ability to estimate these two hazard coordinates, danger index and time-to-accident, offers many possibilities for informing system control strategies and improving accident prevention and risk mitigation
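
    The idea of augmenting a state-space model with a hazard level and extracting an advance-notice time can be sketched in a few lines; the dynamics, the danger index and every number below are invented for illustration, not the authors' rejected-takeoff model.

    ```python
    import numpy as np

    # Propagate a toy discrete-time state-space model and report when a scalar
    # hazard level h(x) crosses its critical threshold (the "time-to-accident"
    # proxy). All values are hypothetical.

    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])          # e.g. position/speed kinematics (assumed)
    x = np.array([0.0, 8.0])            # initial state: position 0 m, speed 8 m/s
    threshold, runway_end = 1.0, 100.0  # hypothetical danger-index limit

    def hazard_level(state):
        """Danger index in [0, 1]: proximity of position to the runway end."""
        return min(state[0] / runway_end, 1.0)

    t, dt = 0.0, 0.1
    while hazard_level(x) < threshold and t < 60.0:
        x = A @ x                        # advance the dynamics one step
        t += dt
    print(f"hazard threshold (b)reached at t = {t:.1f} s -> advance notice for go/no-go")
    ```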

  8. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    Science.gov (United States)

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    The time until the next blood donation plays a major role in a donor becoming a regular, continuing one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed with a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov chain Monte Carlo algorithm in OpenBUGS; convergence was checked via the Gelman-Rubin criteria using the BOA package in R. Age, job and education had significant effects on the chance of donating blood: the chance of donation for older donors, clerical workers, labourers, the self-employed, students and educated donors was higher and, in return, the time intervals between their blood donations were shorter. Given the significant effects of these variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.

  9. Leaching of hazardous substances from a composite construction product – An experimental and modelling approach for fibre-cement sheets

    Energy Technology Data Exchange (ETDEWEB)

    Lupsea, Maria [University of Toulouse, INSA, UPS, INP, LISBP, 135 Avenue de Rangueil, F-31077 Toulouse (France); INRA, UMR 792, F-31400 Toulouse (France); CNRS, UMR 5504, F-31400 Toulouse (France); Paris–Est University, CSTB–Scientific and Technical Centre for the Building Industry, DEE/Environmentand Life Cycle Engineering Team, 24 rue Joseph Fourier, F–38400 Saint Martin d’Hères (France); Tiruta-Barna, Ligia, E-mail: ligia.barna@insa-toulouse.fr [University of Toulouse, INSA, UPS, INP, LISBP, 135 Avenue de Rangueil, F-31077 Toulouse (France); INRA, UMR 792, F-31400 Toulouse (France); CNRS, UMR 5504, F-31400 Toulouse (France); Schiopu, Nicoleta [Paris–Est University, CSTB–Scientific and Technical Centre for the Building Industry, DEE/Environmentand Life Cycle Engineering Team, 24 rue Joseph Fourier, F–38400 Saint Martin d’Hères (France)

    2014-01-15

    Highlights: • Biocide and heavy metals leaching from fibre-cement sheet was investigated. • Equilibrium and dynamic leaching tests were used as modelling support. • The chemical-transport model identifies the main fixation/solubilisation mechanisms. • Biocides as terbutryn and boron were released by the commercial product. • FCS exhibit a cement-like leaching behaviour with high organic carbon release. -- Abstract: The leaching behaviour of a commercial fibre-cement sheet (FCS) product has been investigated. A static pH dependency test and a dynamic surface leaching test have been performed at lab scale. These tests allowed the development of a chemical-transport model capable to predict the release of major and trace elements over the entire pH range, in function of time. FCS exhibits a cement-type leaching behaviour with respect to the mineral species. Potentially hazardous species are released in significant quantities when compared to their total content. These are mainly heavy metals commonly encountered in cement matrixes and boron (probably added as biocide). Organic compounds considered as global dissolved carbon are released in significant concentrations, originating probably from the partial degradation of the organic fibres. The pesticide terbutryn (probably added during the preservative treatment of the organic fibres) was systematically identified in the leachates. The simulation of an upscaled runoff scenario allowed the evaluation of the cumulative release over long periods and the distribution of the released quantities in time, in function of the local exposure conditions. After 10 years of exposure the release reaches significant fractions of the species’ total content – going from 4% for Cu to near 100% for B.

  10. Fatigue crack growth in mixed mode I+II+III non-proportional loading conditions in a 316 stainless steel, experimental analysis and modelling of the effects of crack tip plasticity

    International Nuclear Information System (INIS)

    Fremy, F.

    2012-01-01

    This thesis deals with fatigue crack growth under non-proportional variable-amplitude mixed mode I + II + III loading conditions and analyses the effects of internal stresses stemming from the confinement of the plastic zone in small-scale-yielding conditions. The tests showed that there are antagonistic long-distance and short-distance effects of the loading history on fatigue crack growth. The shape of the loading path, and not only the maximum and minimum values in this path, is crucial and, by comparison, the effects of contact and friction are of lesser importance. Internal stresses play a major role in the fatigue crack growth rate and in the crack path. An approach was developed to analyze the elastic-plastic behavior of a representative section of the crack front using FEA. A model reduction technique is used to extract the relevant information from the FE results. To do so, the velocity field is partitioned into mode I, II and III elastic and plastic components, each component being characterized by an intensity factor and a fixed spatial distribution. The calculations were used to select seven loading paths in I + II and I + II + III mixed mode conditions, which all have the same amplitudes for each mode and the same maximum, minimum and average values. These paths are supposed to be equivalent in the sense of common failure criteria, but differ significantly when the elastic-plastic behavior of the material is accounted for. The results of finite element simulations and of simulations using a simplified model proposed in this thesis are both in agreement with experimental results. The approach was also used to discuss the role of mode III loading steps. Since the material behavior is nonlinear, the nominal loading direction does not coincide with the plastic flow direction. Adding a mode III loading step to a mode I+II fatigue cycle may, in some cases, significantly modify the behaviour of the crack (crack growth rate, crack path and plastic flow). (author)

  11. Proportional Reasoning and the Visually Impaired

    Science.gov (United States)

    Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…

  12. Adaptive bayesian analysis for binomial proportions

    CSIR Research Space (South Africa)

    Das, Sonali

    2008-10-01

    Full Text Available We consider the problem of testing the proportion of some trait. For example, say we are interested in inferring about the effectiveness of a certain intervention teaching strategy by comparing the proportion of ‘proficient’ teachers before and after the intervention. The number...
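
    The Bayesian setup the record describes reduces, in its simplest conjugate form, to Beta posteriors on the two proportions; a hedged sketch with invented counts and a uniform prior (both assumptions):

    ```python
    import numpy as np
    from scipy import stats

    # Beta-Binomial inference on a proportion before and after an intervention.
    # Counts and the Beta(1, 1) prior below are illustrative assumptions.

    a, b = 1.0, 1.0                      # uniform prior on the proportion
    before = dict(successes=18, n=60)    # proficient / assessed, pre-intervention
    after = dict(successes=31, n=58)     # post-intervention

    post_before = stats.beta(a + before["successes"], b + before["n"] - before["successes"])
    post_after = stats.beta(a + after["successes"], b + after["n"] - after["successes"])

    # Monte Carlo estimate of P(p_after > p_before | data):
    rng = np.random.default_rng(0)
    draws = 100_000
    p_gain = (post_after.rvs(draws, random_state=rng) >
              post_before.rvs(draws, random_state=rng)).mean()
    print(f"P(proportion increased) ~= {p_gain:.3f}")
    ```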

  13. Mix Proportion Design of Asphalt Concrete

    Science.gov (United States)

    Wu, Xianhu; Gao, Lingling; Du, Shoujun

    2017-12-01

    Based on the gradations of AC and SMA, this paper designs a new type of anti-skid mixture combining the advantages of both. The chapter introduces material selection, the calculation of the mineral aggregate mix ratio, the determination of the optimum asphalt content, and the mix proportion design of the asphalt concrete mixture. This paper introduces the new technology of mix proportion design.

  14. Proportional gas scintillation detectors and their applications

    International Nuclear Information System (INIS)

    Petr, I.

    1978-01-01

    The principle of a gas proportional scintillation detector and its function are described. The dependence of the energy resolution of Si(Li) and xenon proportional detectors on the input window size is given. A typical design is shown of a xenon detector used for X-ray spectrometry at energies of 277 eV to 5.898 keV and at a gas pressure of 98 to 270 kPa. Gas proportional scintillation detectors show considerably better energy resolution than common proportional counters, and even better resolution than semiconductor Si(Li) detectors at low X-ray energies. For detection areas smaller than 25 mm2, Si(Li) detectors show better resolution, especially at higher X-ray energies. For window areas of 25 to 190 mm2 both types of detectors are equal; for window areas exceeding 190 mm2 the proportional scintillation detector has higher energy resolution. (B.S.)

  15. A multi criteria analog model for assessing the vulnerability of rural catchments to road spills of hazardous substances

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Hygor Evangelista; Pissarra, Teresa Cristina Tarlé [Departamento de Engenharia Rural, Faculdade de Ciências Agrárias e Veterinárias, Universidade Estadual Paulista, Jaboticabal (Brazil); Farias do Valle Junior, Renato [Laboratório de Geoprocessamento, Instituto Federal do Triângulo Mineiro, Campus Uberaba, Uberaba (Brazil); Fernandes, Luis Filipe Sanches [Centro de Investigação e Tecnologias Agroambientais e Biológicas, Universidade de Trás-os-Montes e Alto Douro, Ap 1013, 5001–801 Vila Real (Portugal); Pacheco, Fernando António Leal, E-mail: fpacheco@utad.pt [Centro de Química de Vila Real, Universidade de Trás-os-Montes e Alto Douro, Ap 1013, 5001–801 Vila Real (Portugal)

    2017-05-15

    Road spills of hazardous substances are common in developing countries due to increasing industrialization and traffic accidents, and represent a serious threat to soils and water in catchments. There is abundant literature on equations describing the wash-off of pollutants from roads during a storm event, and there are a number of watershed models incorporating those equations in storm water quality algorithms that route runoff and pollution yields through a drainage system towards the catchment outlet. However, methods describing catchment vulnerability to contamination by road spills based solely on biophysical parameters are scarce. Such methods could be particularly attractive to managers because they can operate with a limited amount of easily collectable data, while still being able to provide important insights into the areas more prone to contamination within the studied watershed. The purpose of this paper was therefore to contribute a new vulnerability model. To accomplish this goal, a selection of medium properties appearing in wash-off equations and routing algorithms was assembled and processed in a parametric framework based on multi-criteria analysis to define the watershed vulnerability. Parameters had to be adapted, however, because wash-off equations and water quality models have been developed to operate primarily in the urban environment, while the vulnerability model is meant to run in rural watersheds. The selected parameters were hillside slope, ground roughness (depending on land use), soil permeability (depending on soil type), distance to water courses and stream density. The vulnerability model is a spatially distributed algorithm prepared to run under the IDRISI Selva software, a GIS platform capable of handling spatial and alphanumeric data and executing the necessary terrain model, hydrographic and thematic analyses. For illustrative purposes, the vulnerability model was applied to the legally protected Environmental Protection

  17. Using Multi-Scenario Tsunami Modelling Results combined with Probabilistic Analyses to provide Hazard Information for the South-West Coast of Indonesia

    Science.gov (United States)

    Zosseder, K.; Post, J.; Steinmetz, T.; Wegscheider, S.; Strunz, G.

    2009-04-01

    Indonesia is located at one of the most active geological subduction zones in the world. Following the recent undersea earthquakes and the tsunamis they triggered in December 2004 and July 2006, further tsunamis are likely in the near future, because a century of relative tectonic quiescence has built up stresses that can be released in abrupt vertical seafloor displacements. To face this devastating threat, tsunami hazard maps are essential as a basis for evacuation planning and mitigation strategies. Tsunami hazard assessment is mostly based on numerical modelling, because model results normally offer the most precise basis for a hazard analysis: they include spatially distributed data and their influence on the hydraulic dynamics. Generally, a model result gives the intensity distribution of a tsunami at the coast (or run-up) and the spatial distribution of the maximum inundation area, depending on the location and magnitude of the tsunami source used. The source used as the model boundary condition is usually chosen by a worst-case approach: the location and magnitude that are likely to occur and are assumed to generate the worst impact are used to predict the impact on a specific area. But for a tsunami hazard assessment covering a large coastal area, as demanded in the GITEWS (German Indonesian Tsunami Early Warning System) project in which the present work is embedded, this approach is not practicable, because many tsunami sources can cause an impact at the coast and must be considered. Thus a multi-scenario tsunami modelling approach was developed to provide a reliable hazard assessment covering large areas. For the Indonesian Early Warning System, many tsunami scenarios were modelled by the Alfred Wegener Institute (AWI) for different probable tsunami sources and different magnitudes along the Sunda Trench. Every modelled scenario delivers the spatial distribution of
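
    The abstract breaks off before the aggregation step, but the general idea of combining many modelled scenarios into probabilistic hazard information can be sketched as follows. The scenario probabilities, the depth threshold and the independence assumption are illustrative; this is not the GITEWS implementation:

```python
import numpy as np

# Illustrative sketch: combine per-scenario inundation grids into a probability-
# of-exceedance map. Scenario probabilities and the threshold are made-up
# demonstration values; the actual GITEWS aggregation is not given above.
def exceedance_probability(depth_grids, scenario_probs, threshold_m=0.5):
    """P(inundation depth > threshold) per grid cell, assuming independent
    scenarios, so P = 1 - prod(1 - p_i) over the scenarios that exceed."""
    no_exceed = np.ones_like(depth_grids[0], dtype=float)
    for grid, p in zip(depth_grids, scenario_probs):
        exceeds = grid > threshold_m            # cells flooded in this scenario
        no_exceed *= np.where(exceeds, 1.0 - p, 1.0)
    return 1.0 - no_exceed

# Toy usage with 2x2 inundation-depth grids (m) for two modelled scenarios:
g1 = np.array([[1.2, 0.0], [0.7, 0.1]])
g2 = np.array([[0.9, 0.6], [0.0, 0.0]])
print(exceedance_probability([g1, g2], [0.01, 0.02]))
```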

  18. Volcanogenic SO2, a natural pollutant: Measurements, modeling and hazard assessment at Vulcano Island (Aeolian Archipelago, Italy).

    Science.gov (United States)

    Granieri, Domenico; Vita, Fabio; Inguaggiato, Salvatore

    2017-12-01

    Sulfur dioxide (SO2) is a major component of magmatic gas discharges. Once emitted into the atmosphere, it can affect the air and land environment at different spatial and temporal scales, with harmful effects on human health and plant communities. We used a dense dataset of continuous SO2 flux and meteorological measurements collected at Vulcano over an 8-year period from May 2008 to February 2016 to model air SO2 concentrations over the island. To this end, we adopted the DISGAS (DISpersion of GAS) numerical code coupled with the Diagnostic Wind Model (DWM). SO2 concentrations in air were determined for three different SO2 emission rates: a reference SO2 flux of ∼18 t/d (the median of more than 800 measurements), an enhanced SO2 flux of 40 t/d (the average of all measurements plus 1σ), and a maximum SO2 flux of 106 t/d (the maximum value measured in the investigated period). Maximum SO2 concentrations in air were estimated at the crater, near the high-T fumarole field that is the source of the gas, and ranged from 2000 ppb to ∼24,000 ppb for the reference flux, from 2000 ppb to 51,000 ppb for the enhanced flux, and from 5000 ppb to 136,000 ppb for the maximum flux, with peak values in limited areas at the bottom of the crater. These concentrations pose a hazard for people visiting the crater, particularly for sensitive individuals. Based on the estimated SO2 concentrations in air, we also consider the phytotoxic effects of SO2 on local vegetation.
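
    Passive-gas dispersion codes are, to first order, linear in the source strength, so the concentration fields for the three flux levels can be related by simple scaling. A minimal sketch under that assumption follows; the reference peak value is an illustrative pick from the quoted range, and the quoted per-run maxima deviate somewhat from pure scaling, so treat the output as a first-order estimate:

```python
# Minimal sketch: for a linear (passive-tracer) dispersion model, the predicted
# concentration at a receptor scales with the emission rate. The flux values are
# from the abstract; the reference peak concentration is an illustrative pick,
# not a value taken from the paper itself.
REFERENCE_FLUX_TPD = 18.0          # median SO2 flux, t/d
PEAK_PPB_AT_REFERENCE = 24_000.0   # upper crater-area value for the reference run

def scaled_peak_ppb(flux_tpd: float) -> float:
    """Scale the reference-run peak concentration to another emission rate."""
    return PEAK_PPB_AT_REFERENCE * flux_tpd / REFERENCE_FLUX_TPD

for flux in (18.0, 40.0, 106.0):   # reference, enhanced, maximum flux
    print(f"{flux:5.0f} t/d -> ~{scaled_peak_ppb(flux):,.0f} ppb")
```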

  19. Flood Hazard Zonation by Combining Mod-Clark and HEC-RAS Models in Bustan Dam Basin, Golestan Province

    Directory of Open Access Journals (Sweden)

    Z. Parisay

    2014-12-01

    Floods are devastating phenomena that every year incur casualties and property damage. Flood zonation is an efficient technique for flood management. The main goal of this research is flood hazard and risk zonation along a 21 km reach of the Gorganrud river in the Bustan dam watershed under two conditions: the present land-use condition and a planning scenario. To this end, a combination of a hydrologic model (the distributed HEC-HMS with the Mod-Clark transform option) and a hydraulic model (HEC-RAS) was used. The required inputs to run the Mod-Clark module of HEC-HMS are gridded files of the river basin, curve number and rainfall in the SHG coordinate system and DSS format. In this research the input files were prepared using the Watershed Modeling System (WMS) at a cell size of 200 m. Since the Mod-Clark method requires rainfall data in radar (NEXRAD) format, the distributed rainfall map series, prepared at 15-minute intervals within the PCRaster GIS system, was converted to the DSS format using the asc2dss package; the curve number map was likewise converted to the DSS format using HEC-GeoHMS. These DSS files were then substituted for the rainfall and curve number maps within the WMS. After calibration and validation, the model was run for return periods of 2, 5, 10, 25, 50, 100 and 200 years under both the current land-use condition and the planning scenario. The simulated peak discharges, the river geometry and cross-section data (at 316 locations) prepared with the HEC-GeoRAS software, and the roughness coefficients were used by HEC-RAS to simulate the hydraulic behavior of the river, and flood inundation maps were produced using GIS. The evaluation showed that, with a percent error in peak flow below 3.2%, the model performs well in peak flow simulation, but it is not successful in volume estimation. The flood zonation results revealed that, of the total floodplain area with
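
    The peak-flow criterion used in the evaluation above is a plain percent error. A minimal sketch with invented observed/simulated values follows; the paper's actual calibration data are not given in the abstract:

```python
# Minimal sketch of the peak-flow evaluation criterion quoted above: the
# percent error between observed and simulated peak discharge. The sample
# events are invented for illustration; the paper reports errors below 3.2%.
def peak_flow_percent_error(observed_m3s: float, simulated_m3s: float) -> float:
    """Absolute percent error of the simulated peak discharge."""
    return abs(simulated_m3s - observed_m3s) / observed_m3s * 100.0

events = [(145.0, 149.2), (88.0, 86.1)]  # (observed, simulated) peaks, m^3/s
for obs, sim in events:
    print(f"obs={obs}, sim={sim}: error={peak_flow_percent_error(obs, sim):.1f}%")
```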