The Additive Hazard Mixing Models
Institute of Scientific and Technical Information of China (English)
Ping LI; Xiao-liang LING
2012-01-01
This paper is concerned with the aging and dependence properties of additive hazard mixing models, including some stochastic comparisons. Further, some useful bounds on the reliability functions of additive hazard mixing models are obtained.
Further Results on Dynamic Additive Hazard Rate Model
Directory of Open Access Journals (Sweden)
Zhengcheng Zhang
2014-01-01
Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Examples are given to illustrate different aging properties and stochastic comparisons of the model.
Coordinate descent methods for the penalized semiparametric additive hazards model
DEFF Research Database (Denmark)
Gorst-Rasmussen, Anders; Scheike, Thomas
For survival data with a large number of explanatory variables, lasso penalized Cox regression is a popular regularization strategy. However, a penalized Cox model may not always provide the best fit to data and can be difficult to estimate in high dimension because of its intrinsic nonlinearity. The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires...
High-dimensional additive hazard models and the Lasso
Gaïffas, Stéphane
2011-01-01
We consider a general high-dimensional additive hazard model in a non-asymptotic setting, including regression for censored data. In this context, we consider a Lasso estimator with a fully data-driven $\ell_1$ penalization, which is tuned for the estimation problem at hand. We prove sharp oracle inequalities for this estimator. Our analysis involves a new "data-driven" Bernstein's inequality, which is of independent interest, where the predictable variation is replaced by the optional variation.
Additive Hazard Regression Models: An Application to the Natural History of Human Papillomavirus
Directory of Open Access Journals (Sweden)
Xianhong Xie
2013-01-01
There are several statistical methods for time-to-event analysis, among which the Cox proportional hazards model is most commonly used. However, when the absolute change in risk, instead of the risk ratio, is of primary interest, or when the proportional hazards assumption for the Cox proportional hazards model is violated, an additive hazard regression model may be more appropriate. In this paper, we give an overview of this approach and then apply a semiparametric as well as a nonparametric additive model to a data set from a study of the natural history of human papillomavirus (HPV) in HIV-positive and HIV-negative women. The results from the semiparametric model indicated, on average, an additional 14 oncogenic HPV infections per 100 woman-years associated with CD4 count < 200 relative to HIV-negative women, and those from the nonparametric additive model showed an additional 40 oncogenic HPV infections per 100 women over 5 years of follow-up, while the estimated hazard ratio in the Cox model was 3.82. Although the Cox model can provide a better understanding of the exposure-disease association, the additive model is often more useful for public health planning and intervention.
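The contrast between additive and multiplicative summaries in this abstract can be illustrated numerically. The sketch below uses hypothetical rates (not the study's actual estimates) to show how an additive excess of 14 events per 100 person-years and a hazard ratio describe the same comparison differently:

```python
# Illustrative only: hypothetical rates, not the HPV study's actual data.
baseline_hazard = 0.05   # events per person-year in the reference group (assumed)
excess_hazard = 0.14     # additive excess: 14 extra events per 100 person-years

exposed_hazard = baseline_hazard + excess_hazard   # additive hazards summary
hazard_ratio = exposed_hazard / baseline_hazard    # what a Cox model would target

print(f"extra events per 100 person-years: {excess_hazard * 100:.0f}")
print(f"implied hazard ratio: {hazard_ratio:.2f}")
```

The same exposure effect reads as "14 extra infections per 100 woman-years" on the additive scale but as a roughly 4-fold relative increase on the ratio scale; the former maps directly onto public health planning, as the abstract notes.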
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
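The regression calibration idea mentioned in this abstract can be sketched in a simple linear-model analogue (all quantities below are hypothetical illustrations, not the authors' additive-hazards estimators): the error-prone measurement W = X + U is replaced by the best linear predictor E[X | W] before fitting, which undoes the attenuation caused by the measurement error.

```python
import random

random.seed(1)
n = 20000
beta = 2.0
sigma_x2, sigma_u2 = 1.0, 1.0   # variances of the true covariate X and error U

xs = [random.gauss(0, sigma_x2 ** 0.5) for _ in range(n)]
ws = [x + random.gauss(0, sigma_u2 ** 0.5) for x in xs]   # W = X + U
ys = [beta * x + random.gauss(0, 0.1) for x in xs]

def slope(u, v):
    """Least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

# Naive fit on W: attenuated toward zero by sigma_x2 / (sigma_x2 + sigma_u2)
naive = slope(ws, ys)

# Regression calibration: substitute E[X | W] = lambda_ * W (means are zero here)
lambda_ = sigma_x2 / (sigma_x2 + sigma_u2)
x_hat = [lambda_ * w for w in ws]
corrected = slope(x_hat, ys)

print(round(naive, 2), round(corrected, 2))
```

With these variances the naive slope is roughly half the true value of 2, while the calibrated fit recovers it; the abstract's point is that the analogous effects under the additive hazards model differ from the well-documented Cox-model case.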
Coordinate Descent Methods for the Penalized Semiparametric Additive Hazards Model
Directory of Open Access Journals (Sweden)
Anders Gorst-Rasmussen
2012-04-01
For survival data with a large number of explanatory variables, lasso penalized Cox regression is a popular regularization strategy. However, a penalized Cox model may not always provide the best fit to data and can be difficult to estimate in high dimension because of its intrinsic nonlinearity. The semiparametric additive hazards model is a flexible alternative which is a natural survival analogue of the standard linear regression model. Building on this analogy, we develop a cyclic coordinate descent algorithm for fitting the lasso and elastic net penalized additive hazards model. The algorithm requires no nonlinear optimization steps and offers excellent performance and stability. An implementation is available in the R package ahaz. We demonstrate this implementation in a small timing study and in an application to real data.
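The reduction that makes this approach work is that the semiparametric additive hazards estimating equations are linear in the coefficients, so the penalized fit is a quadratic problem solvable by coordinate descent with soft-thresholding. A generic sketch of the lasso case follows, where the matrix D and vector d are hypothetical stand-ins for the model's least-squares-type quantities (this is not the ahaz implementation):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def cd_lasso(D, d, lam, n_iter=200):
    """Cyclic coordinate descent for min_b 0.5*b'Db - d'b + lam*||b||_1.

    D must be symmetric with positive diagonal; in the additive hazards
    setting D and d would come from the estimating equations (assumed here).
    """
    p = len(d)
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove coordinate j's own contribution
            r = d[j] - sum(D[j][k] * b[k] for k in range(p) if k != j)
            b[j] = soft_threshold(r, lam) / D[j][j]
    return b

# Tiny example: identity D decouples the coordinates, so b_j = S(d_j, lam)
D = [[1.0, 0.0], [0.0, 1.0]]
d = [2.0, 0.3]
print(cd_lasso(D, d, lam=0.5))  # -> [1.5, 0.0]
```

Because each coordinate update is a closed-form soft-threshold, no nonlinear optimization step is needed, which is the stability advantage the abstract highlights over penalized Cox regression.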
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study the...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients...
Institute of Scientific and Technical Information of China (English)
Huan-bin Liu; Liu-quan Sun; Li-xing Zhu
2005-01-01
Many survival studies record the times to two or more distinct failures on each subject. The failures may be events of different natures or may be repetitions of the same kind of event. In this article, we consider the regression analysis of such multivariate failure time data under the additive hazards model. Simple weighted estimating functions for the regression parameters are proposed, and the asymptotic distribution theory of the resulting estimators is derived. In addition, a class of generalized Wald and generalized score statistics for hypothesis testing and model selection is presented, and the asymptotic properties of these statistics are examined.
Estimation of direct effects for survival data by using the Aalen additive hazards model
DEFF Research Database (Denmark)
Martinussen, T.; Vansteelandt, S.; Gerster, M.
2011-01-01
We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned...
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against simple random sampling and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to analyze the cancer risk associated with radon exposure.
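The weighting idea behind such estimators can be shown in miniature: units sampled with known, outcome-dependent probabilities are re-weighted by the inverse of those probabilities so that sample averages are unbiased for population quantities. The sketch below is a generic Horvitz-Thompson illustration with made-up data, not the authors' pseudo-score estimator:

```python
import random

random.seed(7)
population = [random.gauss(10, 2) for _ in range(100000)]

def inclusion_prob(y):
    # Hypothetical outcome-dependent design: oversample large outcomes
    return 0.9 if y > 12 else 0.1

sample = [(y, inclusion_prob(y)) for y in population
          if random.random() < inclusion_prob(y)]

# Naive sample mean is biased upward because large outcomes are oversampled
naive = sum(y for y, _ in sample) / len(sample)

# Inverse-probability weighting restores (approximate) unbiasedness
ipw = sum(y / p for y, p in sample) / sum(1 / p for _, p in sample)

true_mean = sum(population) / len(population)
print(round(naive, 1), round(ipw, 1), round(true_mean, 1))
```

The naive mean overshoots the population mean by more than one unit here, while the re-weighted mean lands close to it; the ODS estimator applies the same correction inside a pseudo-score equation rather than a simple average.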
Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data
Directory of Open Access Journals (Sweden)
Leili TAPAK
2016-02-01
Background: One substantial part of microarray studies is to predict patients' survival based on their gene expression profiles. Variable selection techniques are powerful tools for handling high dimensionality in the analysis of microarray data. However, these techniques have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated on between 1987 and 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator (lasso), smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and the elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score, and c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach outperformed the other methods: it had the lowest integrated Brier score (0.137±0.07) and the greatest median over-time AUC and c-index (0.803±0.06 and 0.779±0.13, respectively). Five out of 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. The expression of RTN4, SON, IGF1R, and CDC20 decreased the survival time, while the expression of SMARCAD1 increased it. Conclusion: The elastic net had higher predictive capability than the other methods for the survival time of patients with bladder cancer in the presence of competing risks, based on an additive hazards model. Keywords: Survival analysis, Microarray data, Additive hazards model, Variable selection, Bladder cancer
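The c-index used above to compare methods can be computed in a few lines: it is the fraction of usable pairs in which the subject with the higher predicted risk fails first. A minimal version for right-censored data (ties in time or score are ignored for brevity; production implementations handle them):

```python
def c_index(times, events, risk_scores):
    """Concordance index for right-censored survival data.

    A pair (i, j) is usable when the earlier of the two distinct times is an
    observed event (event indicator 1, not censored); it is concordant when
    the subject failing earlier has the higher predicted risk.
    """
    usable = concordant = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # Order the pair so that `a` has the earlier time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # unusable: tied times, or earlier time is censored
            usable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1
    return concordant / usable

# Risk perfectly anti-monotone with survival time -> c-index of 1.0
print(c_index([2, 5, 7, 9], [1, 1, 0, 1], [0.9, 0.6, 0.5, 0.1]))  # -> 1.0
```

A value of 0.5 corresponds to random prediction, so the 0.779 reported above indicates substantially better-than-chance ranking of patients.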
Credit Contagion Default Aalen Additive Hazard Model
Institute of Scientific and Technical Information of China (English)
田军; 周勇
2012-01-01
In this paper we consider credit risk default models based on the additive hazard model. Not only do we incorporate macroeconomic and firm-specific conditions, but by introducing industry-specific covariates we also characterize the credit contagion between industries; in this way, we overcome the underestimation of default correlation in earlier models. We provide maximum likelihood estimators and their asymptotic properties for the parametric additive hazard model. Two estimating methods are considered and compared, and the optimal weight estimator is found to be more efficient. This paper also considers the semiparametric additive hazard model, for which we provide estimators and their asymptotic properties based on martingale estimating equations. Simulation studies confirm that the methods perform well.
The Additive-multiplicative Hazards Model for Multiple Type of Recurrent Gap Times
Institute of Scientific and Technical Information of China (English)
Zhang Qi-xian; Liu Ji-cai; Guan Qiang
2015-01-01
Recurrent event gap times data frequently arise in biomedical studies, and often more than one type of event is of interest. To evaluate the effects of covariates on the marginal recurrent event hazard functions, there exist two types of hazards models: the multiplicative hazards model and the additive hazards model. In this paper, we propose a more flexible additive-multiplicative hazards model for multiple types of recurrent gap times data, wherein some covariates are assumed to be additive while others are multiplicative. An estimating equation approach is presented to estimate the regression parameters. We establish the asymptotic properties of the proposed estimators.
Computer Model Locates Environmental Hazards
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
Models of volcanic eruption hazards
Energy Technology Data Exchange (ETDEWEB)
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
Crossing Hazard Functions in Common Survival Models.
Zhang, Jiajia; Peng, Yingwei
2009-10-15
Crossing hazard functions have extensive applications in modeling survival data. However, existing studies in the literature mainly focus on comparing crossed hazard functions and estimating the time at which the hazard functions cross, and there is little theoretical work on conditions under which hazard functions from a model will have a crossing. In this paper, we investigate crossing status of hazard functions from the proportional hazards (PH) model, the accelerated hazard (AH) model, and the accelerated failure time (AFT) model. We provide and prove conditions under which the hazard functions from the AH and the AFT models have no crossings or a single crossing. A few examples are also provided to demonstrate how the conditions can be used to determine crossing status of hazard functions from the three models.
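A concrete instance of the single-crossing case: two Weibull hazards h(t) = (a/b)(t/b)^(a-1) with shapes on opposite sides of 1 cross exactly once, and the crossing time can be located numerically. The parameters below are illustrative choices, not values from the paper:

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def crossing_time(h1, h2, lo=1e-6, hi=100.0, tol=1e-10):
    """Bisection on the sign of h1 - h2; assumes one crossing in [lo, hi]."""
    f = lambda t: h1(t) - h2(t)
    assert f(lo) * f(hi) < 0, "no sign change: hazards may not cross here"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Decreasing hazard (shape < 1) vs increasing hazard (shape > 1): one crossing
h1 = lambda t: weibull_hazard(t, shape=0.5, scale=1.0)
h2 = lambda t: weibull_hazard(t, shape=2.0, scale=1.0)
t_star = crossing_time(h1, h2)
print(round(t_star, 4))
```

Here the crossing can also be solved in closed form (0.5 t^{-0.5} = 2t gives t = 0.25^{2/3} ≈ 0.397), which provides a check on the numerical result; the paper's contribution is conditions guaranteeing no crossing or a single crossing in general.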
Modeling lahar behavior and hazards
Manville, Vernon; Major, Jon J.; Fagents, Sarah A.
2013-01-01
Lahars are highly mobile mixtures of water and sediment of volcanic origin that are capable of traveling tens to > 100 km at speeds exceeding tens of km hr-1. Such flows are among the most serious ground-based hazards at many volcanoes because of their sudden onset, rapid advance rates, long runout distances, high energy, ability to transport large volumes of material, and tendency to flow along existing river channels where populations and infrastructure are commonly concentrated. They can grow in volume and peak discharge through erosion and incorporation of external sediment and/or water, inundate broad areas, and leave deposits many meters thick. Furthermore, lahars can recur for many years to decades after an initial volcanic eruption, as fresh pyroclastic material is eroded and redeposited during rainfall events, resulting in a spatially and temporally evolving hazard. Improving understanding of the behavior of these complex, gravitationally driven, multi-phase flows is key to mitigating the threat to communities at lahar-prone volcanoes. However, their complexity and evolving nature pose significant challenges to developing the models of flow behavior required for delineating their hazards and hazard zones.
Comparative Distributions of Hazard Modeling Analysis
Directory of Open Access Journals (Sweden)
Rana Abdul Wajid
2006-07-01
In this paper we present a comparison among the distributions used in hazard analysis. Simulation techniques have been used to study the behavior of hazard distribution models. The fundamentals of hazard issues are discussed using failure criteria. We present the flexibility of the hazard modeling distribution, which can approach different distributions.
Directory of Open Access Journals (Sweden)
Arefa Jafarzadeh Kohneloo
2015-09-01
Background: Recent studies have shown that genes affecting the survival time of cancer patients play an important role as risk or preventive factors. The present study was designed to determine genes affecting the survival time of patients with diffuse large B-cell lymphoma and to predict the survival time using the selected genes. Materials & Methods: The present study is a cohort study conducted on 40 patients with diffuse large B-cell lymphoma. For these patients, the expression of 2042 genes was measured. To predict the survival time, a semi-parametric additive survival model was combined with two gene selection methods, the elastic net and the lasso. The two methods were evaluated by plotting the area under the ROC curve over time and calculating the integral of this curve. Results: Based on our findings, the elastic net method identified 10 genes, and the Lasso-Cox method identified 7 genes. GENE3325X increased the survival time (P=0.006), whereas GENE3980X and GENE377X reduced it (P=0.004). These three genes were selected as important in both methods. Conclusion: This study showed that the elastic net method outperformed the common lasso method in terms of predictive power. Moreover, applying the additive model instead of Cox regression with microarray data is a feasible way to predict the survival time of patients.
Satellite image collection modeling for large area hazard emergency response
Liu, Shufan; Hodgson, Michael E.
2016-08-01
Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas, larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large-area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons, and image collection priorities for different portions of the impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.
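A toy analogue of this planning problem can make the priority-weighted coverage structure concrete. The sketch below uses a greedy heuristic over hypothetical scenes and polygon priorities; note the paper itself solves the problem with an exact algorithm, so this is only an illustration of the objective, not the authors' method:

```python
# Hypothetical data: each candidate scene covers a set of impact-area polygons,
# and each polygon has a collection priority weight.
priorities = {"A": 5.0, "B": 3.0, "C": 1.0, "D": 4.0}
scenes = {
    "s1": {"A", "B"},
    "s2": {"B", "C", "D"},
    "s3": {"D"},
}

def greedy_plan(scenes, priorities, budget):
    """Pick up to `budget` scenes, each time taking the scene that adds the
    most not-yet-covered priority weight (greedy weighted max coverage)."""
    covered, plan = set(), []
    for _ in range(budget):
        best, gain = None, 0.0
        for name, polys in scenes.items():
            if name in plan:
                continue
            g = sum(priorities[p] for p in polys - covered)
            if g > gain:
                best, gain = name, g
        if best is None:
            break
        plan.append(best)
        covered |= scenes[best]
    return plan

print(greedy_plan(scenes, priorities, budget=2))  # -> ['s1', 's2']
```

With a budget of two acquisitions, the plan covers all four polygons; an exact formulation would certify optimality, which matters when the impact area is large and discontinuous.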
The Additive Hazards Model for Multiple Type Recurrent Gap Times
Institute of Scientific and Technical Information of China (English)
刘吉彩; 张日权; 刘焕彬
2013-01-01
In many biomedical and engineering studies, recurrent event data and gap times between successive events are common, and often more than one type of recurrent event is of interest. It is well known that the proportional hazards model may not be appropriate for fitting survival times in some settings. In this paper, under multiple type recurrent gap times data, we consider an additive hazards model to assess the effect of covariates on survival times. For inference about the regression coefficients and baseline cumulative hazard functions, an estimating equation approach is developed. Furthermore, we establish the asymptotic properties of the proposed estimators.
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
Carvalho, Francisco; Covas, Ricardo
2016-06-01
We consider mixed models $y = \sum_{i=0}^{w} X_i \beta_i$ with $V(y) = \sum_{i=1}^{w} \theta_i M_i$, where $M_i = X_i X_i^\top$, $i = 1, \ldots, w$, and $\mu = X_0 \beta_0$. For these we will estimate the variance components $\theta_1, \ldots, \theta_w$, as well as estimable vectors, through the decomposition of the initial model into sub-models $y(h)$, $h \in \Gamma$, with $V(y(h)) = \gamma(h) I_{g(h)}$, $h \in \Gamma$. Moreover we will consider $L$ extensions of these models, i.e., $\mathring{y} = Ly + \epsilon$, where $L = D(1_{n_1}, \ldots, 1_{n_w})$ and $\epsilon$, independent of $y$, has null mean vector and variance-covariance matrix $\theta_{w+1} I_n$, where $n = \sum_{i=1}^{w} n_i$.
Development of Additional Hazard Assessment Models
1977-03-01
In the case of continuous release of liquid, the pool reaches a maximum radius for a constant liquid regression rate. Therefore, when a fire results on a...
POTENTIAL HAZARDS DUE TO FOOD ADDITIVES IN ORAL HYGIENE PRODUCTS
Directory of Open Access Journals (Sweden)
Damla TUNCER-BUDANUR
2016-04-01
Food additives used to preserve flavor or to enhance the taste and appearance of foods are also present in oral hygiene products. The aim of this review is to provide information concerning food additives in oral hygiene products and their adverse effects. Many of the food additives in oral hygiene products are potential allergens, and they may lead to allergic reactions such as urticaria, contact dermatitis, rhinitis, and angioedema. Dental practitioners, as well as health care providers, must be aware of the possibility of allergic reactions due to food additives in oral hygiene products. Proper dosage levels, delivery vehicles, frequency, potential benefits, and adverse effects of oral health products should be explained completely to the patients. There is a need to raise awareness among dental professionals on this subject and to develop a data gathering system for possible adverse reactions.
A conflict model for the international hazardous waste disposal dispute
Energy Technology Data Exchange (ETDEWEB)
Hu Kaixian, E-mail: k2hu@engmail.uwaterloo.ca [Department of Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1 (Canada); Hipel, Keith W., E-mail: kwhipel@uwaterloo.ca [Department of Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1 (Canada); Fang, Liping, E-mail: lfang@ryerson.ca [Department of Mechanical and Industrial Engineering, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada)
2009-12-15
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
Hazard Warning: model misuse ahead
DEFF Research Database (Denmark)
Dickey-Collas, M.; Payne, Mark; Trenkel, V.
2014-01-01
The use of modelling approaches in marine science, and in particular fisheries science, is explored. We highlight that the choice of model used for an analysis should account for the question being posed or the context of the management problem. We examine a model-classification scheme based......-users (i.e. managers or policy developers). The combination of attributes leads to models that are considered to have empirical, mechanistic, or analytical characteristics, but not a combination of them. In fisheries science, many examples can be found of models with these characteristics. However, we...
Business models for additive manufacturing
DEFF Research Database (Denmark)
Hadar, Ronen; Bilberg, Arne; Bogers, Marcel
2015-01-01
Digital fabrication — including additive manufacturing (AM), rapid prototyping and 3D printing — has the potential to revolutionize the way in which products are produced and delivered to the customer. Therefore, it challenges companies to reinvent their business model — describing the logic...
Proportional hazards models with discrete frailty.
Caroni, Chrys; Crowder, Martin; Kimber, Alan
2010-07-01
We extend proportional hazards frailty models for lifetime data to allow a negative binomial, Poisson, Geometric or other discrete distribution of the frailty variable. This might represent, for example, the unknown number of flaws in an item under test. Zero frailty corresponds to a limited failure model containing a proportion of units that never fail (long-term survivors). Ways of modifying the model to avoid this are discussed. The models are illustrated on a previously published set of data on failures of printed circuit boards and on new data on breaking strengths of samples of cord.
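The long-term survivor feature mentioned above can be checked numerically: under proportional hazards frailty, the marginal survivor function is the Laplace transform of the frailty distribution evaluated at the cumulative baseline hazard, so for Poisson(mu) frailty the survival probability tends to P(Z = 0) = exp(-mu) as t grows. A small sketch, assuming an exponential baseline hazard for illustration:

```python
import math

def marginal_survival(t, mu, baseline_rate=1.0):
    """Marginal S(t) under PH frailty with Poisson(mu) frailty Z.

    Conditional survival is exp(-Z * H0(t)); averaging over Z ~ Poisson(mu)
    gives the Laplace transform exp(-mu * (1 - exp(-H0(t)))), evaluated at
    the cumulative baseline hazard H0(t) = baseline_rate * t (assumed
    exponential baseline here).
    """
    H0 = baseline_rate * t
    return math.exp(-mu * (1.0 - math.exp(-H0)))

mu = 1.5  # e.g. mean number of "flaws" in an item under test
# As t grows, S(t) approaches exp(-mu): the proportion of units that never
# fail, i.e. those with zero frailty (zero flaws).
print(round(marginal_survival(50.0, mu), 4), round(math.exp(-mu), 4))
```

The plateau at exp(-mu) is exactly the limited-failure behavior the abstract describes, and why modifications are needed when long-term survivors are implausible.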
Experimental Concepts for Testing Seismic Hazard Models
Marzocchi, W.; Jordan, T. H.
2015-12-01
Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.
Thomas, Brian C
2015-01-01
Astrophysical ionizing radiation events such as supernovae, gamma-ray bursts, and solar proton events have been recognized as a potential threat to life on Earth, primarily through depletion of stratospheric ozone and the subsequent increase in solar UV radiation at Earth's surface and in the upper levels of the ocean. Other work has also considered the potential impact of nitric acid rainout, concluding that no significant threat is likely. Not yet studied to date is the potential impact of ozone produced in the lower atmosphere following an ionizing radiation event. Ozone is a known irritant to organisms on land and in water and therefore may be a significant additional hazard. Using previously completed atmospheric chemistry modeling, we have examined the amount of ozone produced in the lower atmosphere for the case of a gamma-ray burst and find that the values are too small to pose a significant additional threat to the biosphere. These results may be extended to other ionizing radiation events, including supe...
Integrated Modeling for Flood Hazard Mapping Using Watershed Modeling System
Directory of Open Access Journals (Sweden)
Seyedeh S. Sadrolashrafi
2008-01-01
Full Text Available In this study, a new framework that integrates a Geographic Information System (GIS) with the Watershed Modeling System (WMS) for flood modeling is developed. It also interconnects the terrain models and the GIS software with standard commercial hydrological and hydraulic models, including HEC-1, HEC-RAS, etc. The Dez River Basin (about 16213 km2) in Khuzestan province, Iran, was chosen as the study area because of its frequent, severe flash flooding. As a case study, a major flood in the autumn of 2001 is used to examine the modeling framework. The model consists of a rainfall-runoff model (HEC-1) that converts excess precipitation to overland flow and channel runoff, and a hydraulic model (HEC-RAS) that simulates steady-state flow through the river channel network based on the HEC-1 peak hydrographs. In addition, it delineates maps of potential flood zonation for the Dez River Basin. These are produced with state-of-the-art GIS using the WMS software. Watershed parameters are calibrated manually to obtain a good simulation of discharge at three sub-basins. With the calibrated discharge, WMS is capable of producing a flood hazard map. The modeling framework presented in this study demonstrates the accuracy and usefulness of the WMS software for flash flood control. The results of this research will benefit future modeling efforts by providing validated hydrological software to forecast flooding on a regional scale. The model was designed for the Dez River Basin, but this regional-scale model may be used as a prototype for applications in other areas.
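As a rough illustration of the loss-rate step in a rainfall-runoff model such as HEC-1, the sketch below implements the SCS curve-number method for direct runoff depth. HEC-1 supports several loss methods, so the choice of method and the CN value here are assumptions for illustration, not the study's configuration.

```python
def scs_runoff_depth(p_inches, cn):
    """Direct runoff depth (inches) from storm rainfall via the SCS
    curve-number method, one of the loss-rate options in HEC-1."""
    s = 1000.0 / cn - 10.0   # potential maximum retention
    ia = 0.2 * s             # initial abstraction (standard 0.2*S)
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# Example: a 5-inch storm on a basin with a hypothetical CN of 80
q = scs_runoff_depth(5.0, 80.0)
```

In a HEC-1-style chain, runoff depths like `q` would be convolved with a unit hydrograph to produce the peak hydrographs passed to the hydraulic model.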
Marzocchi, Warner; Jordan, Thomas
2014-05-01
Probabilistic assessment has become a widely accepted procedure for quantitatively estimating natural hazards. In essence, probabilities are meant to quantify the ubiquitous and deep uncertainties that characterize the evolution of natural systems. However, notwithstanding the very wide use of the terms 'uncertainty' and 'probability' in natural hazards, the way in which they are linked, how they are estimated, and their scientific meaning are far from clear, as testified by the last Intergovernmental Panel on Climate Change (IPCC) report and by its subsequent review. The lack of a formal framework to interpret uncertainty and probability coherently has paved the way for some of the strongest criticisms of hazard analysis; in fact, it has been argued that most natural hazard analyses are intrinsically 'unscientific'. For example, among the concerns is the use of expert opinion to characterize the so-called epistemic uncertainties; many have argued that such personal degrees of belief cannot be measured and, by implication, cannot be tested. The purpose of this talk is to confront and clarify the conceptual issues associated with the role of uncertainty and probability in natural hazard analysis and the conditions that make a hazard model testable and thus 'scientific'. Specifically, we show that testability of hazard models requires a suitable taxonomy of uncertainty embedded in a proper logical framework. This taxonomy of uncertainty is composed of aleatory variability, epistemic uncertainty, and ontological error. We discuss their differences, their link with probability, and their estimation using data, models, and subjective expert opinion. We show that these different uncertainties, and the testability of hazard models, can be unequivocally defined only for a well-defined experimental concept, that is, a concept external to the model under test. All these points are illustrated through simple examples related to probabilistic seismic hazard analysis.
Recent Experiences in Aftershock Hazard Modelling in New Zealand
Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.
2014-12-01
The occurrence of several sequences of earthquakes in New Zealand in the last few years has given GNS Science significant recent experience in aftershock hazard modelling and forecasting. First was the Canterbury sequence of events, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence occurred in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate exceeding the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. It has been used to revise building design standards for the region and has contributed to planning the rebuilding of Christchurch in multiple respects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return within 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
Lahar Hazard Modeling at Tungurahua Volcano, Ecuador
Sorensen, O. E.; Rose, W. I.; Jaya, D.
2003-04-01
lahar-hazard-zones using a digital elevation model (DEM), was used to construct a hazard map for the volcano. The 10-meter-resolution DEM was constructed for Tungurahua Volcano using scanned topographic lines obtained from the GIS Department at the Escuela Politécnica Nacional, Quito, Ecuador. The steep topographic gradients and rapid downcutting of most rivers draining the edifice prevent the deposition of lahars on the lower flanks of Tungurahua. Modeling confirms the high degree of flow channelization in the deep Tungurahua canyons. The inundation zones observed and delineated by LAHARZ at Baños identify safe zones within the city that would provide protection from even the largest-magnitude lahar expected.
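LAHARZ delineates inundation zones from semi-empirical scaling laws that relate lahar volume to inundated cross-sectional and planimetric areas. A minimal sketch of those published calibrations (Iverson et al., 1998), assuming SI units; the example volume is hypothetical, not a value from this study:

```python
def laharz_areas(volume_m3):
    """Inundated cross-sectional area A and planimetric area B (m^2)
    for a lahar of a given volume (m^3), using the semi-empirical
    LAHARZ calibrations A = 0.05 V^(2/3) and B = 200 V^(2/3)."""
    v23 = volume_m3 ** (2.0 / 3.0)
    return 0.05 * v23, 200.0 * v23

# Hypothetical 10-million-cubic-meter lahar
a, b = laharz_areas(1e7)
```

LAHARZ fills valley cross sections down-channel until both areas are consumed, which is why the deep Tungurahua canyons confine the modeled flows so strongly.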
Parametric hazard rate models for long-term sickness absence
Koopmans, Petra C.; Roelen, Corne A. M.; Groothoff, Johan W.
2009-01-01
In research on the time to onset of sickness absence and the duration of sickness absence episodes, Cox proportional hazards models are in common use. However, parametric models are to be preferred when time itself is considered as an independent variable. This study compares parametric hazard rate m
A quantitative model for volcanic hazard assessment
Marzocchi, W.; Sandri, L.; Furlan, C.
2006-01-01
Volcanic hazard assessment is a basic ingredient for risk-based decision-making in land-use planning and emergency management. Volcanic hazard is defined as the probability of any particular area being affected by a destructive volcanic event within a given period of time (Fournier d’Albe 1979). The probabilistic nature of such an important issue derives from the fact that volcanic activity is a complex process, characterized by several and usually unknown degrees o...
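The definition of volcanic hazard quoted above (the probability of an area being affected by a destructive event within a given period of time) reduces, under a homogeneous Poisson assumption, to a one-line formula. The rate and window below are purely illustrative:

```python
import math

def hazard_probability(annual_rate, years):
    """Probability that at least one destructive event affects the area
    within the time window, assuming events arrive as a homogeneous
    Poisson process with the stated mean annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

# e.g. a hypothetical one-event-per-500-years area over a 50-year window
p50 = hazard_probability(1 / 500, 50)
```

Real volcanic hazard assessments relax the constant-rate assumption (e.g. with time-dependent or renewal models), but this Poisson form is the usual baseline.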
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
Directory of Open Access Journals (Sweden)
Ruimin Li
2014-01-01
Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time (AFT) hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, for which the best-fitting distributions differed. Given the best hazard-based model for each incident time phase, the predictions are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.
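The hazard shapes that distinguish these AFT families can be computed directly from their closed forms. The parameters below are arbitrary, chosen only to contrast a monotone Weibull hazard with the rise-and-fall hazard of the log-logistic, the pattern a Weibull cannot capture:

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard: monotone increasing (shape > 1) or decreasing (shape < 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def loglogistic_hazard(t, shape, scale):
    """Log-logistic hazard: rises then falls when shape > 1."""
    x = (t / scale) ** shape
    return (shape / t) * x / (1 + x)

ts = [0.5, 1.0, 2.0, 4.0, 8.0]
wb = [weibull_hazard(t, 1.5, 2.0) for t in ts]       # monotone increasing
ll = [loglogistic_hazard(t, 2.0, 2.0) for t in ts]    # unimodal, peak near t = 2
```

Comparing fitted hazards like these across incident time phases is what lets a study conclude that "the best-fitting distributions differed" by phase.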
Directory of Open Access Journals (Sweden)
Omid Hamidi
2014-01-01
Full Text Available Microarray technology results in high-dimensional, low-sample-size data sets. Fitting sparse models is therefore essential, because only a small number of influential genes can reliably be identified. A number of variable selection approaches have been proposed for high-dimensional time-to-event data based on the Cox proportional hazards model where censoring is present. The present study applied three sparse variable selection techniques, the Lasso, smoothly clipped absolute deviation (SCAD), and smooth integration of counting and absolute deviation (SICA), to gene expression survival data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performance of the techniques was evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P<0.001). The Lasso showed the maximum median area under the ROC curve over time (0.95), and smoothly clipped absolute deviation showed the lowest prediction error (0.105). The genes selected by all methods improved the prediction of a purely clinical model, indicating the valuable information contained in the microarray features. It was therefore concluded that the applied approaches can satisfactorily predict survival based on selected gene expression measurements.
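The Lasso fitting underlying such gene selection is commonly done by cyclic coordinate descent with soft-thresholding (the same algorithmic idea the penalized additive hazards literature builds on). A generic sketch for the simpler squared-error case, not the study's implementation; data and penalty are synthetic:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso linear regression by cyclic coordinate descent:
    each coefficient is updated in turn via soft-thresholding
    of its partial-residual correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Synthetic data: only the first of 10 features carries signal
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 0] * 3.0 + rng.standard_normal(100) * 0.1
beta = lasso_cd(X, y, lam=50.0)
```

The penalty zeroes the nine noise coefficients exactly while retaining (and shrinking) the true one, which is the sparsity property that makes such methods usable when genes vastly outnumber patients.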
Costa, Antonio
2016-04-01
Volcanic hazards may have destructive effects on the economy, transport, and natural environment at both local and regional scales. Hazardous phenomena include pyroclastic density currents, tephra fall, gas emissions, lava flows, debris flows and avalanches, and lahars. Volcanic hazard assessment is based on available information to characterize potential volcanic sources in the region of interest and to determine whether specific volcanic phenomena might reach a given site. It is focused on estimating the distances that volcanic phenomena could travel from potential sources and their intensity at the considered site. Epistemic and aleatory uncertainties strongly affect the resulting hazard assessment. Within the context of critical infrastructures, volcanic eruptions are rare natural events that can create severe hazards. In addition to being rare events, evidence of many past volcanic eruptions is poorly preserved in the geologic record. The models used to describe the impact of volcanic phenomena span a range of complexities, from simplified physics-based conceptual models to highly coupled thermo-fluid-dynamical approaches. Modelling approaches represent a hierarchy of complexity, which reflects increasing requirements for well-characterized data in order to produce a broader range of output information. In selecting models for the hazard analysis of a specific phenomenon, the questions that the models must answer need to be carefully considered. Independently of the model, the final hazard assessment strongly depends on input derived from detailed volcanological investigations, such as mapping and stratigraphic correlations. For each phenomenon, an overview of currently available approaches for the evaluation of future hazards will be presented with the aim of providing a foundation for future work in developing an international consensus on volcanic hazard assessment methods.
VHub - Cyberinfrastructure for volcano eruption and hazards modeling and simulation
Valentine, G. A.; Jones, M. D.; Bursik, M. I.; Calder, E. S.; Gallo, S. M.; Connor, C.; Carn, S. A.; Rose, W. I.; Moore-Russo, D. A.; Renschler, C. S.; Pitman, B.; Sheridan, M. F.
2009-12-01
Volcanic risk is increasing as populations grow in active volcanic regions, and as national economies become increasingly intertwined. In addition to their significance to risk, volcanic eruption processes form a class of multiphase fluid dynamics with rich physics on many length and time scales. Risk significance, physics complexity, and the coupling of models to complex dynamic spatial datasets all demand the development of advanced computational techniques and interdisciplinary approaches to understand and forecast eruption dynamics. Innovative cyberinfrastructure is needed to enable global collaboration and novel scientific creativity, while simultaneously enabling computational thinking in real-world risk mitigation decisions - an environment where quality control, documentation, and traceability are key factors. Supported by NSF, we are developing a virtual organization, referred to as VHub, to address this need. Overarching goals of the VHub project are: Dissemination. Make advanced modeling and simulation capabilities and key data sets readily available to researchers, students, and practitioners around the world. Collaboration. Provide a mechanism for participants not only to be users but also co-developers of modeling capabilities, and contributors of experimental and observational data sets for use in modeling and simulation, in a collaborative environment that reaches far beyond local work groups. Comparison. Facilitate comparison between different models in order to provide the practitioners with guidance for choosing the "right" model, depending upon the intended use, and provide a platform for multi-model analysis of specific problems and incorporation into probabilistic assessments. Application. Greatly accelerate access and application of a wide range of modeling tools and related data sets to agencies around the world that are charged with hazard planning, mitigation, and response. Education. Provide resources that will promote the training of the
Hazard identification by extended multilevel flow modelling with function roles
DEFF Research Database (Denmark)
Wu, Jing; Zhang, Laibin; Jørgensen, Sten Bay
2014-01-01
HAZOP studies are widely accepted in the chemical and petroleum industries as the method for conducting process hazard analysis related to design, maintenance and operation of the systems. In this paper, a HAZOP reasoning method based on function-oriented modelling, multilevel flow modelling (MFM
Regional landslide hazard assessment based on Distance Evaluation Model
Institute of Scientific and Technical Information of China (English)
Jiacun LI; Yan QIN; Jing LI
2008-01-01
There are many factors influencing landslide occurrence. The key to landslide control is to identify the regional landslide hazard factors. The Cameron Highlands of Malaysia was selected as the study area. Using bivariate statistical analysis with GIS software, the authors analyzed the relationships between landslides and environmental factors such as lithology, geomorphology, elevation, roads, and land use. A Distance Evaluation Model was developed using Landslide Density (LD), and the landslide hazard of the Cameron Highlands was assessed. The result shows that the model has high prediction precision.
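The Landslide Density statistic at the heart of such bivariate analysis can be sketched as the share of cells in each factor class that contain mapped landslides. The class names and cell counts below are hypothetical, for illustration only:

```python
def landslide_density(class_cells, slide_cells):
    """Landslide density (LD) per factor class: the fraction of cells in
    each class that contain mapped landslides. Classes with no mapped
    landslides get LD = 0."""
    return {c: slide_cells.get(c, 0) / n for c, n in class_cells.items()}

# Hypothetical cell counts per lithology class
ld = landslide_density({"granite": 4000, "schist": 2500, "alluvium": 1500},
                       {"granite": 80, "schist": 250, "alluvium": 15})
```

Classes with high LD (here the hypothetical "schist") are weighted as stronger hazard indicators when the per-factor scores are combined into the regional assessment.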
Agent-based Modeling with MATSim for Hazards Evacuation Planning
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
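The static least-cost-distance view described above can be sketched with Dijkstra's algorithm on a small cost grid. This toy is emphatically not MATSim, which simulates individual agents and congestion on a road network; grid values and layout are assumptions:

```python
import heapq

def travel_times(grid, start):
    """Least-cost travel time from a safe-zone cell to every reachable
    cell of a 4-connected grid. grid[r][c] is the seconds needed to
    cross that cell, or None for impassable terrain."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# 3x3 grid with an impassable center cell; safe zone at (0, 0)
grid = [[10, 10, 10],
        [10, None, 10],
        [10, 10, 10]]
d = travel_times(grid, (0, 0))
```

Comparing such travel times against the hazard arrival time gives the static assessment; agent-based runs then reveal whether congestion along the shortest routes invalidates it.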
Wind Shear Modeling for Aircraft Hazard Definition.
1978-02-01
11. Lewellen, W. S., G. G. Williamson, and N. E. Teske. "Estimates of the Low Level Wind Shear and Turbulence in the Vicinity of Kennedy... E. Teske. "Model Predictions of Wind and Turbulence Profiles Associated with an Ensemble of Aircraft Accidents," NASA CR-2884, July 1977.
Toward Building a New Seismic Hazard Model for Mainland China
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China was generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest are distributed to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
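The synthetic-catalog step can be sketched with a simple rejection sampler: propose seismic moments from a pure Pareto (Gutenberg-Richter) tail and accept with an exponential corner taper. This is an illustrative stand-in for the tapered Gutenberg-Richter law, and the beta, minimum-moment, and corner-moment values are arbitrary, not the model's:

```python
import math
import random

def sample_tgr_moment(beta, m_min, m_corner, rng):
    """Draw one seismic moment from a Pareto tail with an exponential
    taper at the corner moment: propose from the pure Pareto
    (inverse-CDF), accept with probability exp((m_min - m)/m_corner)."""
    while True:
        m = m_min * (1.0 - rng.random()) ** (-1.0 / beta)
        if rng.random() < math.exp((m_min - m) / m_corner):
            return m

rng = random.Random(42)
beta, m_min, m_corner = 0.65, 1e17, 1e21   # illustrative values only
moments = [sample_tgr_moment(beta, m_min, m_corner, rng) for _ in range(2000)]
mags = [(math.log10(m) - 9.1) / 1.5 for m in moments]   # Mw from moment (N*m)
```

A b-value-like exponent governs the small-magnitude slope while the taper suppresses events far above the corner magnitude, which is exactly why the corner can be constrained by the geodetic moment rate.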
Analysis of time to event outcomes in randomized controlled trials by generalized additive models.
Directory of Open Access Journals (Sweden)
Christos Argyropoulos
Full Text Available Randomized controlled trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates of the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks, or even differences in restricted mean survival time, between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO (a long-duration study conducted under evolving standards of care on a heterogeneous patient population). PGAM can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall) treatment effect analyses but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and
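The follow-up-splitting step that makes Poisson GAM machinery applicable to survival data can be sketched as a simple data expansion. The cut points below are arbitrary; the paper places them at Gauss-Lobatto quadrature nodes:

```python
def split_follow_up(time, event, cuts):
    """Expand one subject's follow-up into (interval, exposure, events)
    rows at the given cut points. Fitting a Poisson model to such rows
    with log(exposure) as an offset gives the piecewise-exponential
    likelihood that PGAM-style hazard modeling builds on."""
    rows, lo = [], 0.0
    for k, hi in enumerate(list(cuts) + [float("inf")]):
        if time <= lo:
            break
        exposure = min(time, hi) - lo
        died = 1 if (event and time <= hi) else 0
        rows.append((k, exposure, died))
        lo = hi
    return rows

# Subject observed to die at t = 2.5, with cut points at t = 1 and t = 2
rows = split_follow_up(2.5, 1, [1.0, 2.0])
```

Each row contributes a Poisson term with mean hazard-times-exposure; smoothing the per-interval log hazard over time is what turns this into a GAM.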
Identifying confounders using additive noise models
Janzing, Dominik; Mooij, Joris; Schoelkopf, Bernhard
2012-01-01
We propose a method for inferring the existence of a latent common cause ('confounder') of two observed random variables. The method assumes that the two effects of the confounder are (possibly nonlinear) functions of the confounder plus independent, additive noise. We discuss under which conditions the model is identifiable (up to an arbitrary reparameterization of the confounder) from the joint distribution of the effects. We state and prove a theoretical result that provides evidence for the conjecture that the model is generically identifiable under suitable technical conditions. In addition, we propose a practical method to estimate the confounder from a finite i.i.d. sample of the effects and illustrate that the method works well on both simulated and real-world data.
Hazard Response Modeling Uncertainty (A Quantitative Method)
1988-10-01
... C_p is the concentration predicted by some component or model. The variance of C_o/C_p is calculated and defined as var(Model I), where Model I could be
The Framework of a Coastal Hazards Model - A Tool for Predicting the Impact of Severe Storms
Barnard, Patrick L.; O'Reilly, Bill; van Ormondt, Maarten; Elias, Edwin; Ruggiero, Peter; Erikson, Li H.; Hapke, Cheryl; Collins, Brian D.; Guza, Robert T.; Adams, Peter N.; Thomas, Julie
2009-01-01
The U.S. Geological Survey (USGS) Multi-Hazards Demonstration Project in Southern California (Jones and others, 2007) is a five-year project (FY2007-FY2011) integrating multiple USGS research activities with the needs of external partners, such as emergency managers and land-use planners, to produce products and information that can be used to create more disaster-resilient communities. The hazards being evaluated include earthquakes, landslides, floods, tsunamis, wildfires, and coastal hazards. For the Coastal Hazards Task of the Multi-Hazards Demonstration Project in Southern California, the USGS is leading the development of a modeling system for forecasting the impact of winter storms threatening the entire Southern California shoreline from Pt. Conception to the Mexican border. The modeling system, run in real-time or with prescribed scenarios, will incorporate atmospheric information (that is, wind and pressure fields) with a suite of state-of-the-art physical process models (that is, tide, surge, and wave) to enable detailed prediction of currents, wave height, wave runup, and total water levels. Additional research-grade predictions of coastal flooding, inundation, erosion, and cliff failure will also be performed. Initial model testing, performance evaluation, and product development will be focused on a severe winter-storm scenario developed in collaboration with the Winter Storm Working Group of the USGS Multi-Hazards Demonstration Project in Southern California. Additional offline model runs and products will include coastal-hazard hindcasts of selected historical winter storms, as well as additional severe winter-storm simulations based on statistical analyses of historical wave and water-level data. The coastal-hazards model design will also be appropriate for simulating the impact of storms under various sea level rise and climate-change scenarios. The operational capabilities of this modeling system are designed to provide emergency planners with
Application of remote sensed precipitation for landslide hazard assessment models
Kirschbaum, D. B.; Peters-Lidard, C. D.; Adler, R. F.; Kumar, S.; Harrison, K.
2010-12-01
The increasing availability of remotely sensed land surface and precipitation information provides new opportunities to improve upon existing landslide hazard assessment methods. This research considers how satellite precipitation information can be applied in two types of landslide hazard assessment frameworks: a global, landslide forecasting framework and a deterministic slope-stability model. Examination of both landslide hazard frameworks points to the need for higher resolution spatial and temporal precipitation inputs to better identify small-scale precipitation forcings that contribute to significant landslide triggering. This research considers how satellite precipitation information may be downscaled to account for local orographic impacts and better resolve peak intensities. Precipitation downscaling is employed in both models to better approximate local rainfall distribution, antecedent conditions, and intensities. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale and have the potential to significantly advance landslide hazard assessment tools. The first landslide forecasting tool, running in near real-time at http://trmm.gsfc.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. Results of the algorithm evaluation indicate that considering higher resolution susceptibility information is a key factor in better resolving potentially hazardous areas. However, success in resolving when landslide activity is probable is closely linked to appropriate characterization of the empirical rainfall intensity-duration thresholds. We test a variety of rainfall thresholds to evaluate algorithmic performance accuracy and determine the optimal set of conditions that
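Empirical rainfall intensity-duration thresholds of the kind tested above are usually power laws of the form I = alpha * D^(-beta). The sketch below checks events against such a threshold; the alpha and beta coefficients are placeholders, not the calibrated values from the study:

```python
def exceeds_threshold(intensity_mm_hr, duration_hr, alpha=10.0, beta=0.6):
    """Check a rainfall event against a power-law intensity-duration
    threshold I = alpha * D^(-beta): long events trigger at lower
    intensities than short ones."""
    return intensity_mm_hr >= alpha * duration_hr ** (-beta)

long_storm = exceeds_threshold(3.0, 24.0)    # 24 h of moderate rain
short_shower = exceeds_threshold(4.0, 0.5)   # a brief 30-minute shower
```

Evaluating forecast skill across a grid of (alpha, beta) pairs is one way to "test a variety of rainfall thresholds" as the abstract describes.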
A generalized additive regression model for survival times
DEFF Research Database (Denmark)
Scheike, Thomas H.
2001-01-01
Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Rockfall hazard analysis using LiDAR and spatial modeling
Lan, Hengxing; Martin, C. Derek; Zhou, Chenghu; Lim, Chang Ho
2010-05-01
Rockfalls have been significant geohazards along the Canadian Class 1 Railways (CN Rail and CP Rail) since their construction in the late 1800s. These rockfalls cause damage to infrastructure, interruption of business, and environmental impacts, and their occurrence varies both spatially and temporally. The proactive management of these rockfall hazards requires enabling technologies. This paper discusses a hazard assessment strategy for rockfalls along a section of a Canadian railway using LiDAR and spatial modeling. LiDAR provides accurate topographical information on the source areas of rockfalls and along their paths. Spatial modeling was conducted using Rockfall Analyst, a three-dimensional extension to GIS, to determine the characteristics of the rockfalls in terms of travel distance, velocity and energy. Historical rockfall records were used to calibrate the physical characteristics of the rockfall processes. The results based on a high-resolution digital elevation model from a LiDAR dataset were compared with those based on a coarse digital elevation model. A comprehensive methodology for rockfall hazard assessment is proposed which takes into account the characteristics of source areas, the physical processes of rockfalls, and the spatial distribution of their frequency and energy.
Random weighting method for Cox's proportional hazards model
Institute of Scientific and Technical Information of China (English)
CUI WenQuan; LI Kai; YANG YaNing; WU YueHua
2008-01-01
The variance of a parameter estimate in Cox's proportional hazards model is based on the asymptotic variance. When the sample size is small, the variance can be estimated by the bootstrap method. However, if the censoring rate in a survival data set is high, the bootstrap method may fail to work properly, because bootstrap samples may be even more heavily censored due to repeated sampling of the censored observations. This paper proposes a random weighting method for variance estimation and confidence interval estimation in the proportional hazards model. This method, unlike the bootstrap method, does not lead to more severe censoring than the original sample does. Its large sample properties are studied, and consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as the bootstrap method is and can produce good variance estimates and confidence intervals.
Defaultable Game Options in a Hazard Process Model
Directory of Open Access Journals (Sweden)
Tomasz R. Bielecki
2009-01-01
Full Text Available The valuation and hedging of defaultable game options is studied in a hazard process model of credit risk. A convenient pricing formula with respect to a reference filtration is derived. A connection of arbitrage prices with a suitable notion of hedging is obtained. The main result shows that the arbitrage prices are the minimal superhedging prices with sigma-martingale cost under a risk-neutral measure.
CREATION OF THE MODEL ADDITIONAL PROTOCOL
Energy Technology Data Exchange (ETDEWEB)
Houck, F.; Rosenthal, M.; Wulf, N.
2010-05-25
In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.
Bajo Sanchez, Jorge V.
This dissertation is composed of an introductory chapter and three papers about vulnerability and volcanic hazard maps, with emphasis on lahars. The introductory chapter reviews definitions of the term vulnerability by the social and natural hazard communities and provides a new definition of hazard vulnerability that includes social and natural hazard factors. The first paper explains how the Community Volcanic Hazard Map (CVHM) is used for vulnerability analysis and describes in detail a new methodology to obtain valuable information about ethnophysiographic differences, hazards, and landscape knowledge of communities in the area of interest: the Canton Buenos Aires, situated on the northern flank of the Santa Ana (Ilamatepec) Volcano, El Salvador. The second paper is about creating a lahar hazard map in data-poor environments by generating a landslide inventory and estimating the volumes of dry material that can potentially be carried by lahars. The third paper introduces an innovative lahar hazard map integrating information generated by the previous two papers. It shows the differences between hazard maps created by the communities and by experts, both visually and quantitatively. This new, integrated hazard map was presented to the community with positive feedback and acceptance. The dissertation concludes with a summary chapter on the results and recommendations.
Mathematical-statistical models of generated hazardous hospital solid waste.
Awad, A R; Obeidat, M; Al-Shareef, M
2004-01-01
This research work was carried out under the assumption that wastes generated from hospitals in Irbid, Jordan were hazardous. The hazardous and non-hazardous wastes generated from the different divisions in the three hospitals under consideration were not separated during the collection process. Three hospitals, Princess Basma hospital (public), Princess Bade'ah hospital (teaching), and Ibn Al-Nafis hospital (private) in Irbid were selected for this study. The research work took into account the amounts of solid waste accumulated from each division and also determined the total amount generated from each hospital. The generation rates were determined (kilogram per patient per day; kilogram per bed per day) for the three hospitals. These generation rates were compared with those of similar hospitals in Europe. The evaluation suggested that the current situation regarding the management of these wastes in the three studied hospitals needs revision, as these hospitals do not follow the waste-disposal methods practiced in developed countries that would reduce risk to human health and the environment. Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching, private). In these models, the number of patients, the number of beds, and the type of hospital were revealed to be significant factors in the quantity of waste generated. Multiple regressions were also used to estimate the quantities of wastes generated from similar divisions in the three hospitals (surgery, internal diseases, and maternity).
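A multiple regression of the kind described above can be sketched with an ordinary least-squares fit; the data below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Sketch: multiple linear regression of daily waste quantity (kg) on
# number of patients and number of beds. All values are invented for
# illustration; the paper's actual data are not reproduced here.
patients = np.array([120, 150, 90, 200, 170, 110])
beds     = np.array([100, 130, 80, 160, 140, 95])
waste_kg = np.array([310, 390, 230, 520, 445, 285])

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(patients), patients, beds])
coef, *_ = np.linalg.lstsq(X, waste_kg, rcond=None)

predicted = X @ coef
print(coef)       # [intercept, kg per additional patient, kg per bed]
print(predicted)  # fitted daily waste per hospital
```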
Development of hazard-compatible building fragility and vulnerability models
Karaca, E.; Luco, N.
2008-01-01
We present a methodology for transforming the structural and non-structural fragility functions in HAZUS into a format that is compatible with conventional seismic hazard analysis information. The methodology makes use of the building capacity (or pushover) curves and related building parameters provided in HAZUS. Instead of the capacity spectrum method applied in HAZUS, building response is estimated by inelastic response history analysis of corresponding single-degree-of-freedom systems under a large number of earthquake records. Statistics of the building response are used with the damage state definitions from HAZUS to derive fragility models conditioned on spectral acceleration values. Using the developed fragility models for structural and nonstructural building components, with corresponding damage state loss ratios from HAZUS, we also derive building vulnerability models relating spectral acceleration to repair costs. Whereas in HAZUS the structural and nonstructural damage states are treated as if they are independent, our vulnerability models are derived assuming "complete" nonstructural damage whenever the structural damage state is complete. We show the effects of considering this dependence on the final vulnerability models. The use of spectral acceleration (at selected vibration periods) as the ground motion intensity parameter, coupled with the careful treatment of uncertainty, makes the new fragility and vulnerability models compatible with conventional seismic hazard curves and hence useful for extensions to probabilistic damage and loss assessment.
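Fragility models conditioned on spectral acceleration are conventionally expressed as lognormal CDFs. The sketch below shows that standard form; the median capacities and dispersion are assumed values for illustration, not those derived in the paper.

```python
import numpy as np
from scipy.stats import norm

# Sketch: a lognormal fragility function P(damage >= state | Sa), the
# standard form for HAZUS-compatible fragility models. Median
# capacities (in g) and the dispersion beta are assumptions.

def fragility(sa_g, median_g, beta):
    """Probability of reaching or exceeding a damage state at
    spectral acceleration sa_g, for a lognormal fragility curve."""
    return norm.cdf(np.log(sa_g / median_g) / beta)

medians = {"slight": 0.2, "moderate": 0.4, "extensive": 0.8, "complete": 1.5}
beta = 0.6

for state, m in medians.items():
    print(state, round(float(fragility(0.5, m, beta)), 3))
```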
76 FR 5370 - Potential Addition of Vapor Intrusion Component to the Hazard Ranking System
2011-01-31
The potential addition of a vapor intrusion component to the Hazard Ranking System (HRS) would allow sites with vapor intrusion contamination to be evaluated for placement on the NPL. EPA is accepting public feedback and will hold a listening session for oral and written comments on the topics in the SUPPLEMENTARY INFORMATION section of this Notice, to allow interested parties to present feedback on the potential HRS addition.
On penalized likelihood estimation for a non-proportional hazards regression model.
Devarajan, Karthik; Ebrahimi, Nader
2013-07-01
In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solutions for the baseline hazard, the baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.
Mark-specific hazard ratio model with missing multivariate marks.
Juraska, Michal; Gilbert, Peter B
2016-10-01
An objective of randomized placebo-controlled preventive HIV vaccine efficacy (VE) trials is to assess the relationship between vaccine effects to prevent HIV acquisition and continuous genetic distances of the exposing HIVs to multiple HIV strains represented in the vaccine. The set of genetic distances, only observed in failures, is collectively termed the 'mark.' The objective has motivated a recent study of a multivariate mark-specific hazard ratio model in the competing risks failure time analysis framework. Marks of interest, however, are commonly subject to substantial missingness, largely due to rapid post-acquisition viral evolution. In this article, we investigate the mark-specific hazard ratio model with missing multivariate marks and develop two inferential procedures based on (i) inverse probability weighting (IPW) of the complete cases, and (ii) augmentation of the IPW estimating functions by leveraging auxiliary data predictive of the mark. Asymptotic properties and finite-sample performance of the inferential procedures are presented. This research also provides general inferential methods for semiparametric density ratio/biased sampling models with missing data. We apply the developed procedures to data from the HVTN 502 'Step' HIV VE trial.
New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.
Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C
2015-09-08
Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making.
Regularization for Cox's Proportional Hazards Model With NP-Dimensionality
Bradic, Jelena; Jiang, Jiancheng
2010-01-01
High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to a significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of independent interest, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically ...
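The folded-concave SCAD penalty mentioned above has a well-known piecewise closed form. This sketch writes it out for a single coefficient magnitude, using the conventional a = 3.7; it is illustrative only and is not the paper's estimation procedure.

```python
# Sketch: the SCAD folded-concave penalty p_lam(t) for t = |beta_j|,
# with the conventional second tuning parameter a = 3.7. Near zero it
# behaves like the LASSO penalty; for large |beta_j| it is constant,
# which is what removes the bias of LASSO on large coefficients.

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty evaluated at coefficient magnitude t >= 0."""
    t = abs(t)
    if t <= lam:
        return lam * t                                   # LASSO-like
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2                       # flat tail

lam = 1.0
for t in (0.5, 1.0, 2.0, 5.0):
    print(t, round(scad_penalty(t, lam), 4))
```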
Jackknifed random weighting for Cox proportional hazards model
Institute of Scientific and Technical Information of China (English)
LI Xiao; WU YaoHua; TU DongSheng
2012-01-01
The Cox proportional hazards model is the most used statistical model in the analysis of survival time data. Recently, a random weighting method was proposed to approximate the distribution of the maximum partial likelihood estimate for the regression coefficient in the Cox model. This method was shown in simulation studies to be not as sensitive to heavy censoring as the bootstrap method, but it may not be second-order accurate, as was shown for the bootstrap approximation. In this paper, we propose an alternative random weighting method based on one-step linear jackknife pseudo-values and prove the second-order accuracy of the proposed method. Monte Carlo simulations are also performed to evaluate the proposed method for fixed sample sizes.
Model averaging for semiparametric additive partial linear models
Institute of Scientific and Technical Information of China (English)
(Author not listed)
2010-01-01
To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals for the parameters of interest, we explore a focused information criterion for model selection among APLMs after estimating the nonparametric functions by polynomial spline smoothing, and introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting is avoided, which results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented for illustration.
Hazard based models for freeway traffic incident duration.
Tavassoli Hojati, Ahmad; Ferreira, Luis; Washington, Simon; Charles, Phil
2013-03-01
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis by developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull models, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident duration arising from crashes and hazards. A Weibull model with gamma heterogeneity was most suitable for modelling the incident duration of stationary vehicles. Significant variables affecting incident duration include characteristics of the incidents (severity, type, towing requirements, etc.), as well as the location, time of day, and traffic characteristics of the incident. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is that the durations of each type of incident are uniquely different and respond to different factors. The results of this study are useful for traffic incident management agencies to implement strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
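In an AFT specification such as the Weibull models above, covariates rescale time through a log-linear scale parameter. The sketch below shows that mechanism with invented coefficients and covariates (towing requirement, peak hour); it is not the paper's fitted model.

```python
import math

# Sketch: a Weibull accelerated failure time (AFT) model for incident
# duration. Covariates act through the scale lambda(x) = exp(x' b);
# the shape and all coefficients below are hypothetical.

SHAPE = 1.3  # Weibull shape parameter k (assumed)

def scale(towing_required, peak_hour):
    """Weibull scale (minutes) as a log-linear function of covariates."""
    b0, b_tow, b_peak = 3.4, 0.5, 0.2  # hypothetical coefficients
    return math.exp(b0 + b_tow * towing_required + b_peak * peak_hour)

def survival(t_min, towing_required, peak_hour):
    """P(duration > t) under the Weibull AFT specification."""
    lam = scale(towing_required, peak_hour)
    return math.exp(-((t_min / lam) ** SHAPE))

def median_duration(towing_required, peak_hour):
    """Median duration: lambda * (ln 2)**(1/k)."""
    return scale(towing_required, peak_hour) * math.log(2) ** (1 / SHAPE)

print(round(median_duration(0, 0), 1))  # baseline incident
print(round(median_duration(1, 1), 1))  # towing required, peak hour
```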
Beyond Flood Hazard Maps: Detailed Flood Characterization with Remote Sensing, GIS and 2d Modelling
Santillan, J. R.; Marqueso, J. T.; Makinano-Santillan, M.; Serviano, J. L.
2016-09-01
Flooding is considered one of the most destructive natural disasters, such that understanding floods and assessing the associated risks are becoming more important nowadays. In the Philippines, Remote Sensing (RS) and Geographic Information Systems (GIS) are two main technologies used in the nationwide modelling and mapping of flood hazards. Although the currently available high-resolution flood hazard maps have become very valuable, their use for flood preparedness and mitigation can be maximized by enhancing the layers of information these maps portray. In this paper, we present an approach based on RS, GIS and two-dimensional (2D) flood modelling to generate new flood layers (in addition to the usual flood depth and hazard layers) that are also very useful in flood disaster management, such as flood arrival times, flood velocities, flood duration, flood recession times, and the percentage of a given flood event period during which a particular location is inundated. The availability of these new layers of flood information is crucial for better decision making before, during, and after the occurrence of a flood disaster. The generation of these new flood characteristic layers is illustrated using the Cabadbaran River Basin in Mindanao, Philippines as the case study area. It is envisioned that these detailed maps can be considered as additional inputs in flood disaster risk reduction and management in the Philippines.
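Layers such as arrival time, duration and inundated fraction can be derived per cell from a 2D model's depth time series. The sketch below shows one way to compute them with NumPy; the 0.05 m wet/dry threshold and the toy hydrograph are assumptions for illustration.

```python
import numpy as np

# Sketch: deriving flood-characteristic layers from per-cell depth
# time series produced by a 2D flood model.

def flood_layers(depths, dt_hours, wet=0.05):
    """depths: array (T, ny, nx) of water depth in metres.
    Returns arrival time (h), flood duration (h), and the fraction of
    the event a cell is inundated; arrival is NaN where never flooded."""
    inundated = depths >= wet                       # (T, ny, nx) bool
    ever = inundated.any(axis=0)
    first = inundated.argmax(axis=0).astype(float)  # first wet step
    arrival = np.where(ever, first * dt_hours, np.nan)
    duration = inundated.sum(axis=0) * dt_hours
    fraction = inundated.mean(axis=0)
    return arrival, duration, fraction

# Toy example: one cell flooded from hour 2 to 4, one never flooded.
d = np.zeros((6, 1, 2))
d[2:5, 0, 0] = 0.8                       # wet at steps 2, 3, 4
arr, dur, frac = flood_layers(d, dt_hours=1.0)
print(arr[0, 0], dur[0, 0], frac[0, 0])  # 2.0 3.0 0.5
print(np.isnan(arr[0, 1]), dur[0, 1])    # True 0.0
```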
A DNA based model for addition computation
Institute of Scientific and Technical Information of China (English)
GAO Lin; YANG Xiao; LIU Wenbin; XU Jin
2004-01-01
Much effort has been made to solve computing problems using DNA, an organic computing method that in some cases is preferable to the current electronic computer. However, no one at present has proposed an effective and applicable molecular algorithm for the addition problem, owing to the difficulty of the carry problem, which is easily solved by the hardware of an electronic computer. In this article, we solve this problem by employing two kinds of DNA strands: one, called the result and operation strand, contains the carry information and encodes the ultimate result, while the other, named the carrier, is used only for carrying. The significance of this algorithm lies in its original encoding, the fairly easy steps to follow, and its feasibility under current molecular biological technology.
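The carry problem that the abstract identifies as the hard part for a molecular encoding is easy to state in software. The following is a plain-Python analogue (not the DNA procedure itself): a result string is built bit by bit while the carry propagates to the next position.

```python
# Sketch: binary addition with explicit carry propagation, the step
# that the DNA algorithm must encode chemically. This is a software
# analogue of the "result and operation" string accumulating carries.

def add_binary(a: str, b: str) -> str:
    """Add two binary strings, propagating the carry explicitly."""
    i, j = len(a) - 1, len(b) - 1
    carry = 0
    out = []
    while i >= 0 or j >= 0 or carry:
        s = carry
        if i >= 0:
            s += int(a[i]); i -= 1
        if j >= 0:
            s += int(b[j]); j -= 1
        out.append(str(s % 2))   # result bit at this position
        carry = s // 2           # carry information for the next bit
    return "".join(reversed(out))

print(add_binary("1011", "110"))   # 11 + 6 = 17 -> 10001
```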
Opinion: The use of natural hazard modeling for decision making under uncertainty
Institute of Scientific and Technical Information of China (English)
David E Calkin; Mike Mentis
2015-01-01
Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and to create structured evaluation criteria for complex environmental models. However, to our knowledge there has been less focus on the conditions where decision makers can confidently rely on results from these models. In this review we propose a preliminary set of conditions necessary for the appropriate application of modeled results to natural hazard decision making and provide relevant examples within US wildfire management programs.
Energy Technology Data Exchange (ETDEWEB)
Brown, Nathanael J. K.; Gearhart, Jared Lee; Jones, Dean A.; Nozick, Linda Karen; Prince, Michael
2013-09-01
Currently, much of protection planning is conducted separately for each infrastructure and hazard. Limited funding requires a balance of expenditures between terrorism and natural hazards based on potential impacts. This report documents the results of a Laboratory Directed Research & Development (LDRD) project that created a modeling framework for investment planning in interdependent infrastructures focused on multiple hazards, including terrorism. To develop this framework, three modeling elements were integrated: natural hazards, terrorism, and interdependent infrastructures. For natural hazards, a methodology was created for specifying events consistent with regional hazards. For terrorism, we modeled the terrorists' actions based on assumptions regarding their knowledge, goals, and target identification strategy. For infrastructures, we focused on predicting post-event performance due to specific terrorist attacks and natural hazard events, tempered by appropriate infrastructure investments. We demonstrate the utility of this framework with various examples, including protection of electric power, roadway, and hospital networks.
Preliminary deformation model for National Seismic Hazard map of Indonesia
Energy Technology Data Exchange (ETDEWEB)
Meilano, Irwan; Gunawan, Endra; Sarsito, Dina; Prijatna, Kosasih; Abidin, Hasanuddin Z. [Geodesy Research Division, Faculty of Earth Science and Technology, Institute of Technology Bandung (Indonesia); Susilo,; Efendi, Joni [Agency for Geospatial Information (BIG) (Indonesia)
2015-04-24
A preliminary deformation model for Indonesia's National Seismic Hazard (NSH) map is constructed from block rotation and strain accumulation functions in an elastic half-space. Deformation due to rigid body motion is estimated by rotating six tectonic blocks in Indonesia. The interseismic deformation due to subduction is estimated by assuming coupling on the subduction interface, while deformation at active faults is calculated by assuming that each fault segment slips beneath a locking depth or in combination with creep in a shallower part. This research shows that rigid body motion dominates the deformation pattern with magnitudes of more than 15 mm/year, except in narrow areas near subduction zones and active faults where significant deformation reaches 25 mm/year.
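The rigid-body component described above follows the standard Euler-vector relation v = ω × r. The sketch below evaluates it for a hypothetical rotation pole (not one of the six Indonesian block solutions), giving velocities of the same order as the 15-25 mm/yr quoted in the abstract.

```python
import numpy as np

# Sketch: surface velocity from rigid block rotation, v = omega x r.
# The Euler vector is a hypothetical example, chosen only to land in
# the 15-25 mm/yr range discussed in the abstract.

R_EARTH_MM = 6.371e9  # Earth radius in millimetres

def surface_velocity(lat_deg, lon_deg, omega_rad_per_yr):
    """Velocity (mm/yr, Earth-centred Cartesian components) of a point
    on a rigidly rotating block with Euler vector omega (rad/yr)."""
    lat, lon = np.radians([lat_deg, lon_deg])
    r = R_EARTH_MM * np.array([
        np.cos(lat) * np.cos(lon),
        np.cos(lat) * np.sin(lon),
        np.sin(lat),
    ])
    return np.cross(omega_rad_per_yr, r)

# Hypothetical rotation about the polar axis at 0.24 deg/Myr:
omega = np.array([0.0, 0.0, 0.24 * np.pi / 180 / 1e6])
v = surface_velocity(0.0, 100.0, omega)  # a point on the equator
print(np.linalg.norm(v))  # ~26.7 mm/yr
```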
Optimization of maintenance policy using the proportional hazard model
Energy Technology Data Exchange (ETDEWEB)
Samrout, M. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: mohamad.el_samrout@utt.fr; Chatelet, E. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: chatelt@utt.fr; Kouta, R. [M3M Laboratory, University of Technology of Belfort Montbeliard (France); Chebbo, N. [Industrial Systems Laboratory, IUT, Lebanese University (Lebanon)
2009-01-15
The evolution of system reliability depends on its structure as well as on the evolution of its components' reliability. The latter is a function of component age during a system's operating life. Component aging is strongly affected by maintenance activities performed on the system. In this work, we consider two categories of maintenance activities: corrective maintenance (CM) and preventive maintenance (PM). Maintenance actions are characterized by their ability to reduce this age. PM consists of actions applied to components while they are operating, whereas CM actions occur when the component breaks down. In this paper, we expound a new method to integrate the effect of CM while planning for the PM policy. The proportional hazard function was used as a modeling tool for that purpose. Interesting results were obtained when comparing policies that take the CM effect into consideration with those that do not.
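The interplay of a proportional hazard function with age-reducing maintenance can be sketched as follows. The Weibull baseline, the covariate coefficient and the age-reduction factor are illustrative assumptions, not the paper's fitted model.

```python
import math

# Sketch: proportional hazards h(t, z) = h0(t) * exp(gamma * z) with a
# Weibull baseline, plus a "virtual age" effect of preventive
# maintenance (PM). All parameter values are hypothetical.

BETA, ETA = 2.0, 1000.0   # Weibull shape and scale (hours)
GAMMA = 0.8               # covariate coefficient
RHO = 0.4                 # fraction of age removed by each PM action

def baseline_hazard(t):
    """Weibull baseline hazard h0(t)."""
    return (BETA / ETA) * (t / ETA) ** (BETA - 1)

def hazard(t, z):
    """Proportional hazards model: baseline scaled by the covariate."""
    return baseline_hazard(t) * math.exp(GAMMA * z)

def effective_age(age, n_pm):
    """Virtual age after n_pm PM actions, each removing a fraction
    RHO of the accumulated age."""
    for _ in range(n_pm):
        age *= (1.0 - RHO)
    return age

# PM lowers the hazard by rolling the component back to a younger age.
print(hazard(800.0, z=1.0))
print(hazard(effective_age(800.0, n_pm=2), z=1.0))
```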
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
Hydraulic modeling for lahar hazards at cascades volcanoes
Costa, J.E.
1997-01-01
The National Weather Service flood routing model DAMBRK is able to closely replicate field-documented stages of historic and prehistoric lahars from Mt. Rainier, Washington, and Mt. Hood, Oregon. Modeled times-of-travel of flow waves are generally consistent with documented lahar travel times from other volcanoes around the world. The model adequately replicates a range of lahars and debris flows, including the 230 million m3 Electron lahar from Mt. Rainier, as well as a 10 m3 debris flow generated in a large outdoor experimental flume. The model is used to simulate a hypothetical lahar with a volume of 50 million m3 down the East Fork Hood River from Mt. Hood, Oregon. Although a flow such as this is thought to be possible in the Hood River valley, no field evidence exists on which to base a hazards assessment. DAMBRK seems likely to be usable in many volcanic settings to estimate discharge, velocity, and inundation areas of lahars when input hydrographs and energy-loss coefficients can be reasonably estimated.
Institute of Scientific and Technical Information of China (English)
XU Jing; YANG Chi; ZHANG Guoping
2007-01-01
An information model is adopted to integrate various geoscientific factors to estimate the susceptibility to geological hazards. Further combining dynamic rainfall observations, logistic regression is used to model the probabilities of geological hazard occurrences, upon which hierarchical warnings for rainfall-induced geological hazards are produced. The forecasting and warning model takes numerical precipitation forecasts on grid points as its dynamic input, forecasts the probabilities of geological hazard occurrences on the same grid, and translates the results into likelihoods in the form of a 5-level hierarchy. Validation of the model with observational data for the year 2004 shows that 80% of the geological hazards of that year were identified as "likely enough to release warning messages". The model can satisfy the requirements of an operational warning system, and is thus an effective way to improve the meteorological warnings for geological hazards.
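The logistic-probability-to-warning-level pipeline described above can be sketched in a few lines. The coefficients and the level cut-points below are hypothetical placeholders, not the operational model's values.

```python
import math

# Sketch: logistic regression probability of geological hazard
# occurrence on a grid point, translated into a 5-level warning
# hierarchy. Coefficients and cut-points are assumptions.

def hazard_probability(susceptibility, forecast_rain_mm):
    """P(hazard) from a logistic model combining static susceptibility
    and forecast precipitation (hypothetical coefficients)."""
    eta = -4.0 + 3.0 * susceptibility + 0.05 * forecast_rain_mm
    return 1.0 / (1.0 + math.exp(-eta))

def warning_level(p):
    """Map a probability to the 5-level hierarchy (5 = highest)."""
    cuts = [0.2, 0.4, 0.6, 0.8]
    return 1 + sum(p >= c for c in cuts)

p = hazard_probability(susceptibility=0.7, forecast_rain_mm=60.0)
print(round(p, 3), warning_level(p))   # 0.75 4
```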
Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios
DEFF Research Database (Denmark)
Custer, Rocco; Nishijima, Kazuyoshi
In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in literature...... disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton Bern, Switzerland, subject to flood risk. Thereby, the model is verified...... and the relevance to natural hazard risk assessment is illustrated....
Seismic hazard assessment of Sub-Saharan Africa using geodetic strain rate models
Poggi, Valerio; Pagani, Marco; Weatherill, Graeme; Garcia, Julio; Durrheim, Raymond J.; Mavonga Tuluka, Georges
2016-04-01
The East African Rift System (EARS) is the major active tectonic feature of the Sub-Saharan Africa (SSA) region. Although the seismicity level of such a divergent plate boundary can be described as moderate, several earthquakes have been reported in historical times causing a non-negligible level of damage, albeit mostly due to the high vulnerability of the local buildings and structures. Formulation and enforcement of national seismic codes is therefore an essential future risk mitigation strategy. Nonetheless, a reliable risk assessment cannot be done without the calibration of an updated seismic hazard model for the region. Unfortunately, the major issue in assessing seismic hazard in Sub-Saharan Africa is the lack of basic information needed to construct source and ground motion models. The historical earthquake record is largely incomplete, while the instrumental catalogue is complete down to sufficient magnitude only for a relatively short time span. In addition, mapping of seismogenically active faults is still an ongoing program. Recent studies have identified major seismogenic lineaments, but there is a substantial lack of kinematic information for intermediate-to-small scale tectonic features, information that is essential for the proper calibration of earthquake recurrence models. To compensate for this lack of information, we experiment with the use of a strain rate model recently developed by Stamps et al. (2015) in the framework of an earthquake hazard and risk project along the EARS supported by USAID and jointly carried out by GEM and AfricaArray. We use the inferred geodetic strain rates to derive estimates of total scalar moment release, subsequently used to constrain earthquake recurrence relationships for both area (as distributed seismicity) and fault source models. The rates obtained indirectly from strain rates and more classically derived from the available seismic catalogues are then compared and combined into a unique mixed earthquake recurrence model.
Richardson, David B; Laurier, Dominique; Schubauer-Berigan, Mary K; Tchetgen Tchetgen, Eric; Cole, Stephen R
2014-11-01
Workers' smoking histories are not measured in many occupational cohort studies. Here we discuss the use of negative control outcomes to detect and adjust for confounding in analyses that lack information on smoking. We clarify the assumptions necessary to detect confounding by smoking and the additional assumptions necessary to indirectly adjust for such bias. We illustrate these methods using data from 2 studies of radiation and lung cancer: the Colorado Plateau cohort study (1950-2005) of underground uranium miners (in which smoking was measured) and a French cohort study (1950-2004) of nuclear industry workers (in which smoking was unmeasured). A cause-specific relative hazards model is proposed for estimation of indirectly adjusted associations. Among the miners, the proposed method suggests no confounding by smoking of the association between radon and lung cancer--a conclusion supported by adjustment for measured smoking. Among the nuclear workers, the proposed method suggests substantial confounding by smoking of the association between radiation and lung cancer. Indirect adjustment for confounding by smoking resulted in an 18% decrease in the adjusted estimated hazard ratio, yet this cannot be verified because smoking was unmeasured. Assumptions underlying this method are described, and a cause-specific proportional hazards model that allows easy implementation using standard software is presented.
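The indirect-adjustment idea can be reduced to a cartoon of the multiplicative-bias assumption: if the association between exposure and a negative control outcome can arise only through confounding by smoking, that association estimates the bias factor, which is then divided out of the observed hazard ratio. This is a sketch of the general idea, not the cause-specific relative hazards model proposed in the paper; all numbers are hypothetical:

```python
def indirectly_adjusted_hr(hr_observed, bias_factor):
    """Divide the estimated confounding bias out of the observed hazard
    ratio (multiplicative bias assumption)."""
    return hr_observed / bias_factor

# Hypothetical: exposure-negative-control association of 1.25, attributable
# only to confounding by smoking, used as the bias factor:
adjusted = indirectly_adjusted_hr(2.0, 1.25)
```

As the abstract notes, the validity of such an adjustment cannot be verified when smoking itself is unmeasured.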
A median voter model of health insurance with ex post moral hazard.
Jacob, Johanna; Lundin, Douglas
2005-03-01
One of the main features of health insurance is moral hazard, as defined by Pauly (Pauly, M.V., 1968. The economics of moral hazard: comment. American Economic Review 58, 531-537): people face incentives for excess utilization of medical care since they do not pay the full marginal cost for provision. To mitigate the moral hazard problem, coinsurance can be included in the insurance contract. But health insurance is often publicly provided. Having a uniform coinsurance rate determined in a political process is quite different from having different rates varying in accordance with one's preferences, as is possible with private insurance. We construct a political economy model in order to characterize the political equilibrium and answer questions like: "Under what conditions is there a conflict in society on what coinsurance rate should be set?" and "Which groups of individuals will vote for a higher and lower than equilibrium coinsurance rate, respectively?". We also extend our basic model and allow people to supplement the coverage provided by the government with private insurance. Then, we answer two questions: "Who will buy the additional coverage?" and "How do the coinsurance rates people now face compare with the rates chosen under pure private provision?".
Multiple Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California
Pike, Richard J.; Graymer, Russell W.
2008-01-01
With the exception of Los Angeles, perhaps no urban area in the United States is more at risk from landsliding, triggered by either precipitation or earthquake, than the San Francisco Bay region of northern California. By January each year, seasonal winter storms usually bring moisture levels of San Francisco Bay region hillsides to the point of saturation, after which additional heavy rainfall may induce landslides of various types and levels of severity. In addition, movement at any time along one of several active faults in the area may generate an earthquake large enough to trigger landslides. The danger to life and property rises each year as local populations continue to expand and more hillsides are graded for development of residential housing and its supporting infrastructure. The chapters in the text consist of: *Introduction by Russell W. Graymer *Chapter 1 Rainfall Thresholds for Landslide Activity, San Francisco Bay Region, Northern California by Raymond C. Wilson *Chapter 2 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike and Steven Sobieszczyk *Chapter 3 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk *Chapter 4 Landslide Hazard Modeled for the Cities of Oakland, Piedmont, and Berkeley, Northern California, from a M=7.1 Scenario Earthquake on the Hayward Fault Zone by Scott B. Miles and David K. Keefer *Chapter 5 Synthesis of Landslide-Hazard Scenarios Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike The plates consist of: *Plate 1 Susceptibility to Deep-Seated Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Richard J. Pike, Russell W. Graymer, Sebastian Roberts, Naomi B. Kalman, and Steven Sobieszczyk *Plate 2 Susceptibility to Shallow Landsliding Modeled for the Oakland-Berkeley Area, Northern California by Kevin M. Schmidt and Steven Sobieszczyk
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Energy Technology Data Exchange (ETDEWEB)
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
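The combinatory evaluation of leak path factors that the study argues for, as opposed to the assumed 0.5 × 0.5 multiplication, can be illustrated with a minimal sketch: each serial barrier multiplies, each parallel pathway adds. The pathway splits and per-barrier fractions below are hypothetical, not MELCOR results:

```python
def path_lpf(factors):
    """LPF along one leak path: the product of per-barrier fractions."""
    total = 1.0
    for f in factors:
        total *= f
    return total

def total_lpf(paths):
    """Combine parallel pathways; each path carries a fraction of the
    release ('split') through its own sequence of barriers ('factors')."""
    return sum(p["split"] * path_lpf(p["factors"]) for p in paths)

assumed = path_lpf([0.5, 0.5])               # the assumed 0.5 x 0.5 = 0.25
paths = [
    {"split": 0.7, "factors": [0.4, 0.05]},  # e.g. filtered ventilation
    {"split": 0.3, "factors": [0.6, 0.5]},   # e.g. open doorway
]
combined = total_lpf(paths)
```

The total LPF multiplied by the material at risk then yields the respirable source term for downstream dispersion and dose analysis.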
On Model Specification and Selection of the Cox Proportional Hazards Model*
Lin, Chen-Yen; Halabi, Susan
2013-01-01
Prognosis plays a pivotal role in patient management and trial design. A useful prognostic model should correctly identify important risk factors and estimate their effects. In this article, we discuss several challenges in selecting prognostic factors and estimating their effects using the Cox proportional hazards model. Although it has a flexible semiparametric form, the Cox model is not entirely exempt from model misspecification. To minimize possible misspecification, instead of imposing tradi...
Global Hydrological Hazard Evaluation System (Global BTOP) Using Distributed Hydrological Model
Gusyev, M.; Magome, J.; Hasegawa, A.; Takeuchi, K.
2015-12-01
A global hydrological hazard evaluation system based on the BTOP models (Global BTOP) is introduced; it quantifies flood and drought hazards with simulated river discharges globally for historical, near real-time monitoring and climate change impact studies. The BTOP model utilizes a modified topographic index concept and simulates rainfall-runoff processes including snowmelt, overland flow, soil moisture in the root and unsaturated zones, sub-surface flow, and river flow routing. The current global BTOP is constructed from global data on a 10-min grid and is available to conduct river basin analysis on local, regional, and global scales. To reduce the impact of the coarse resolution, topographical features of global BTOP were obtained using a river network upscaling algorithm that preserves fine-resolution characteristics of the 3-arcsec HydroSHEDS and 30-arcsec Hydro1K datasets. In addition, the GLCC-IGBP land cover (USGS) and the DSMW (FAO) were used for the root zone depth and soil properties, respectively. The long-term seasonal potential evapotranspiration within the BTOP model was estimated by the Shuttleworth-Wallace model using the CRU TS3.1 climate forcing data and GIMMS-NDVI (UMD/GLCF). The global BTOP was run with globally available precipitation such as the APHRODITE dataset and showed good statistical performance compared to the global and local river discharge data in the major river basins. From the simulated daily river discharges at each grid cell, the flood peak discharges of selected return periods were obtained using the Gumbel distribution with L-moments, and the hydrological drought hazard was quantified using the standardized runoff index (SRI). For dynamic (near real-time) applications, the global BTOP model is run with GSMaP-NRT global precipitation, and simulated daily river discharges are utilized in a prototype near real-time discharge simulation system (GFAS-Streamflow), which is used to issue flood peak discharge alerts globally. The global BTOP system and GFAS
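The flood-frequency step, fitting a Gumbel distribution by L-moments to simulated annual peak discharges and reading off return levels, can be sketched as follows. The synthetic sample stands in for BTOP-simulated peaks; it is generated from a known Gumbel(xi=100, alpha=20) at plotting positions purely for illustration:

```python
import math

def gumbel_lmom_fit(sample):
    """Fit Gumbel (EV1) location xi and scale alpha from sample L-moments."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i / (n - 1)) * xv for i, xv in enumerate(x)) / n
    l1, l2 = b0, 2.0 * b1 - b0          # first two sample L-moments
    alpha = l2 / math.log(2.0)
    xi = l1 - 0.5772156649 * alpha      # Euler-Mascheroni constant
    return xi, alpha

def return_level(xi, alpha, T):
    """Discharge exceeded on average once every T years."""
    return xi - alpha * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic annual peak discharges (stand-in for BTOP simulation output):
n = 50
sample = [100.0 - 20.0 * math.log(-math.log((i + 0.5) / n)) for i in range(n)]
xi, alpha = gumbel_lmom_fit(sample)
q100 = return_level(xi, alpha, 100)
```

L-moment fitting is preferred here over ordinary moments because it is markedly less sensitive to the largest simulated peaks.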
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
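The final step described above, combining rupture probabilities with per-variation peak intensity measures into a site hazard curve, can be sketched as follows. The rupture data below are toy values, not CyberShake output, and rupture variations are treated as equally likely:

```python
def hazard_curve(ruptures, levels):
    """P(exceeding each shaking level at the site over the forecast period).

    ruptures: list of (p_rup, peak_ims), where p_rup is the rupture's
    forecast probability and peak_ims holds the peak intensity measures
    of its (equally likely) rupture variations.
    """
    curve = []
    for a in levels:
        p_none = 1.0
        for p_rup, ims in ruptures:
            frac = sum(im > a for im in ims) / len(ims)
            p_none *= 1.0 - p_rup * frac   # assume independent ruptures
        curve.append(1.0 - p_none)
    return curve

# Toy inputs: two ruptures, three shaking levels (g):
ruptures = [(0.1, [0.2, 0.4]), (0.05, [0.5])]
curve = hazard_curve(ruptures, [0.1, 0.3, 0.45])
```

In the real calculation the peak intensity measures come from the reciprocity-based synthetic seismograms rather than from a ground motion prediction equation.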
Use of agent-based modelling in emergency management under a range of flood hazards
Directory of Open Access Journals (Sweden)
Tagg Andrew
2016-01-01
Full Text Available The Life Safety Model (LSM) was developed some 15 years ago, originally for dam break assessments and for informing reservoir evacuation and emergency plans. Alongside other technological developments, the model has evolved into a very useful agent-based tool, with many applications for a range of hazards and receptor behaviour. HR Wallingford became involved in its use in 2006, and is now responsible for its technical development and commercialisation. Over the past 10 years the model has been applied to a range of flood hazards, including coastal surge, river flood, dam failure and tsunami, and has been verified against historical events. Commercial software licences are being used in Canada, Italy, Malaysia and Australia. A core group of LSM users and analysts has been specifying and delivering a programme of model enhancements. These include improvements to traffic behaviour at intersections, new algorithms for sheltering in high-rise buildings, and the addition of monitoring points to allow detailed analysis of vehicle and pedestrian movement. Following user feedback, the ability of LSM to handle large model ‘worlds’ and hydrodynamic meshes has been improved. Recent developments include new documentation, performance enhancements, better logging of run-time events and bug fixes. This paper describes some of the recent developments and summarises some of the case study applications, including dam failure analysis in Japan and mass evacuation simulation in England.
Hidden Markov models for estimating animal mortality from anthropogenic hazards
Carcasses searches are a common method for studying the risk of anthropogenic hazards to wildlife, including non-target poisoning and collisions with anthropogenic structures. Typically, numbers of carcasses found must be corrected for scavenging rates and imperfect detection. ...
Modelling Inland Flood Events for Hazard Maps in Taiwan
Ghosh, S.; Nzerem, K.; Sassi, M.; Hilberts, A.; Assteerawatt, A.; Tillmanns, S.; Mathur, P.; Mitas, C.; Rafique, F.
2015-12-01
Taiwan experiences significant inland flooding, driven by torrential rainfall from plum rain storms and typhoons during summer and fall. Over the last 13 to 16 years, about 3,000 buildings were damaged by such floods annually, with losses of US$0.41 billion (Water Resources Agency). This long, narrow island nation with mostly hilly/mountainous topography is located in the tropical-subtropical zone, with an annual average typhoon-hit frequency of 3-4 (Central Weather Bureau) and annual average precipitation of 2502 mm (WRA), 2.5 times the world average. Spatial and temporal distributions of countrywide precipitation are uneven, with very high local extreme rainfall intensities. Annual average precipitation is 3000-5000 mm in the mountainous regions, 78% of which falls in May-October, and the 1-hour to 3-day maximum rainfalls are about 85 to 93% of the world records (WRA). Rivers in Taiwan are short, with small upstream areas and high watershed runoff coefficients. These rivers have the steepest slopes, the shortest response times with rapid flows, and the largest peak flows as well as specific flood peak discharges (WRA) in the world. RMS has recently developed a countrywide inland flood model for Taiwan, producing hazard return-period maps at 1-arcsec grid resolution. These can be the basis for evaluating and managing flood risk, its economic impacts, and insured flood losses. The model is initiated with sub-daily historical meteorological forcings and calibrated to daily discharge observations at about 50 river gauges over the period 2003-2013. Simulations of hydrologic processes, via rainfall-runoff and routing models, are subsequently performed based on a 10,000-year set of stochastic forcing. The rainfall-runoff model is a physically based, continuous, semi-distributed model for catchment hydrology. The 1-D wave propagation hydraulic model considers catchment runoff in routing and describes large-scale transport processes along the river. It also accounts for reservoir storage
Conceptual geoinformation model of natural hazards risk assessment
Kulygin, Valerii
2016-04-01
Natural hazards are the major threat to safe interactions between nature and society. The assessment of natural hazard impacts and their consequences is important in spatial planning and resource management. Today there is a challenge to advance our understanding of how socio-economic and climate changes will affect the frequency and magnitude of hydro-meteorological hazards and associated risks. However, the impacts of different types of natural hazards on various marine and coastal economic activities are not of the same type. In this study, a conceptual geomodel of risk assessment is presented to highlight the differentiation by type of economic activity in extreme event risk assessment. The marine and coastal ecosystems are considered as the objects of management, on the one hand, and as the place of natural hazards' origin, on the other. One of the key elements in describing such systems is the spatial characterization of their components. Assessment of ecosystem state is based on ecosystem indicators (indexes), which are used to identify changes over time. A scenario approach is utilized to account for spatio-temporal dynamics and uncertainty factors. Two types of scenarios are considered: scenarios of ecosystem service use by economic activities, and scenarios of extreme events and related hazards. The reported study was funded by RFBR, according to research project No. 16-35-60043 mol_a_dk.
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
J. Blahut; P. Horton; S. Sterlacchini; Jaboyedoff, M.
2010-01-01
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of th...
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increased number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency management. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with laboratory and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes
The influence of mapped hazards on risk beliefs: a proximity-based modeling approach.
Severtson, Dolores J; Burt, James E
2012-02-01
Interview findings suggest perceived proximity to mapped hazards influences risk beliefs when people view environmental hazard maps. For dot maps, four attributes of mapped hazards influenced beliefs: hazard value, proximity, prevalence, and dot patterns. In order to quantify the collective influence of these attributes for viewers' perceived or actual map locations, we present a model to estimate proximity-based hazard or risk (PBH) and share study results that indicate how modeled PBH and map attributes influenced risk beliefs. The randomized survey study among 447 university students assessed risk beliefs for 24 dot maps that systematically varied by the four attributes. Maps depicted water test results for a fictitious hazardous substance in private residential wells and included a designated "you live here" location. Of the nine variables that assessed risk beliefs, the numerical susceptibility variable was most consistently and strongly related to map attributes and PBH. Hazard value, location in or out of a clustered dot pattern, and distance had the largest effects on susceptibility. Sometimes, hazard value interacted with other attributes, for example, distance had stronger effects on susceptibility for larger than smaller hazard values. For all combined maps, PBH explained about the same amount of variance in susceptibility as did attributes. Modeled PBH may have utility for studying the influence of proximity to mapped hazards on risk beliefs, protective behavior, and other dependent variables. Further work is needed to examine these influences for more realistic maps and representative study samples.
DEVELOPMENT OF A CRASH RISK PROBABILITY MODEL FOR FREEWAYS BASED ON HAZARD PREDICTION INDEX
Directory of Open Access Journals (Sweden)
Md. Mahmud Hasan
2014-12-01
Full Text Available This study presents a method for the identification of hazardous situations on freeways. The hazard identification is done using a crash risk probability model. For this study, an approximately 18 km long section of the Eastern Freeway in Melbourne (Australia) is selected as a test bed. Two categories of data, i.e. traffic and accident records, are used for the analysis and modelling. In developing the crash risk probability model, a Hazard Prediction Index is formulated from the differences of traffic parameters with threshold values. Seven different prediction indices are examined and the best one is selected as the crash risk probability model based on prediction error minimisation.
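One plausible reading of a threshold-based Hazard Prediction Index, a weighted sum of traffic-parameter exceedances over their thresholds fed into a crash-risk probability, is sketched below. The exact index formulated in the study is not reproduced here; the parameter choices, weights, and logistic coefficients are all hypothetical:

```python
import math

def hazard_prediction_index(params, thresholds, weights):
    """Weighted sum of traffic-parameter exceedances over thresholds."""
    return sum(w * max(0.0, v - t)
               for w, v, t in zip(weights, params, thresholds))

def crash_risk_probability(hpi, a=-3.0, b=0.15):
    """Map the index to a crash probability (logistic link, toy coefficients)."""
    return 1.0 / (1.0 + math.exp(-(a + b * hpi)))

# Hypothetical parameters: mean speed (km/h) and occupancy (%), with thresholds:
hpi = hazard_prediction_index([95.0, 1.9], [80.0, 1.5], [1.0, 10.0])
p = crash_risk_probability(hpi)
```

Candidate indices of this form can then be ranked by prediction error against the accident record, as the study does with its seven alternatives.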
Modelling the costs of natural hazards in games
Bostenaru-Dan, M.
2012-04-01
Games simulating the development of a city are of interest today, including a development at the University of Torino called SimTorino, which simulates the development of the city over the next 20 years. The connection to another games genre beyond video games, board games, will be investigated, since there are games on the construction and reconstruction of a cathedral and its tower and a bridge in an urban environment of the Middle Ages, based on the two novels of Ken Follett, "Pillars of the Earth" and "World Without End", and also more recent games, such as "Urban Sprawl" or the Romanian game "Habitat", dealing with the man-made hazard of demolition. A review of these games will be provided based on first-hand playing experience. In games like "World Without End" or "Pillars of the Earth", just like in the recently popular games of Zynga on social networks, construction management is done by "building" an item out of stylised materials, such as "stone", "sand" or more specific ones such as "nail". Such an approach could also be used for retrofitting buildings against earthquakes, in the sense of "upgrade", not just for extension as is currently the case in games, and this is what our research is about. "World Without End" includes a natural disaster not much analysed today but judged by the author as the worst suffered by mankind: the Black Death. The Black Death has effects and costs as well, modelled not only through action cards but also on the built environment, through buildings remaining empty. On the other hand, games such as "Habitat" rely on role playing, which has recently been recognised as a way to bring game theory to decision making through the so-called contribution of drama, a way to solve conflicts through balancing instead of weighting, and thus related to the Analytic Hierarchy Process. The presentation also aims to give hints on how to design a game for the problem of earthquake retrofit, translating the aims of the actors in such a process into role playing.
Games are also employed in teaching of urban
Expert elicitation for a national-level volcano hazard model
Bebbington, Mark; Stirling, Mark; Cronin, Shane; Wang, Ting; Jolly, Gill
2016-04-01
The quantification of volcanic hazard at the national level is a vital prerequisite to placing volcanic risk on a platform that permits meaningful comparison with other hazards such as earthquakes. New Zealand has up to a dozen dangerous volcanoes, with the usual mixed degrees of knowledge concerning their temporal and spatial eruptive history. Information on the 'size' of the eruptions, be it in terms of VEI, volume or duration, is sketchy at best. These limitations and the need for a uniform approach lend themselves to a subjective hazard analysis via expert elicitation. Approximately 20 New Zealand volcanologists provided estimates for the size of the next eruption from each volcano and, conditional on this, its location, timing and duration. Opinions were likewise elicited from a control group of statisticians, seismologists and (geo)chemists, all of whom had at least heard the term 'volcano'. The opinions were combined via the Cooke classical method. We will report on the preliminary results from the exercise.
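The Cooke classical method referred to above weights each expert by the product of a calibration score and an information score (both derived from performance on seed questions), zeroes out experts below a significance cutoff, and pools opinions linearly. A minimal sketch, with made-up scores:

```python
def cooke_weights(calibration, information, alpha=0.05):
    """Classical-model weights: calibration x information, zeroed below the
    calibration cutoff alpha, then normalized to sum to one."""
    raw = [c * i if c >= alpha else 0.0
           for c, i in zip(calibration, information)]
    s = sum(raw)
    return [r / s for r in raw]

def linear_pool(weights, expert_probs):
    """Performance-weighted linear opinion pool for one event."""
    return sum(w * p for w, p in zip(weights, expert_probs))

# Made-up scores for three experts; the second falls below the cutoff:
w = cooke_weights([0.50, 0.01, 0.20], [1.2, 2.0, 0.8])
pooled = linear_pool(w, [0.1, 0.9, 0.3])
```

Note how the poorly calibrated but highly informative second expert is excluded entirely, which is the characteristic behaviour of the classical model.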
Process chain modeling and selection in an additive manufacturing context
DEFF Research Database (Denmark)
Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael
2016-01-01
This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing...... evolving fields like additive manufacturing....
Computer models used to support cleanup decision-making at hazardous and radioactive waste sites
Energy Technology Data Exchange (ETDEWEB)
Moskowitz, P.D.; Pardi, R.; DePhillips, M.P.; Meinhold, A.F.
1992-07-01
Massive efforts are underway to clean up hazardous and radioactive waste sites located throughout the US. To help determine cleanup priorities, computer models are being used to characterize the source, transport, fate and effects of hazardous chemicals and radioactive materials found at these sites. Although the US Environmental Protection Agency (EPA), the US Department of Energy (DOE), and the US Nuclear Regulatory Commission (NRC) have provided preliminary guidance to promote the use of computer models for remediation purposes, no agency has produced definitive guidance on the models that must be used in these efforts. To identify which models are actually being used to support decision-making at hazardous and radioactive waste sites, a project jointly funded by EPA, DOE and NRC was initiated. The purpose of this project was to: (1) identify models being used for hazardous and radioactive waste site assessment purposes; and (2) describe and classify these models. This report presents the results of this study.
Delayed geochemical hazard: Concept, digital model and case study
Institute of Scientific and Technical Information of China (English)
CHEN Ming; FENG Liu; Jacques Yvon
2005-01-01
Delayed Geochemical Hazard (DGH for short) denotes the whole process of a kind of serious ecological and environmental hazard caused by the sudden reactivation and sharp release of long-term accumulated pollutants, from stable species to active ones, in a soil or sediment system, due to changes in physical-chemical conditions (such as temperature, pH, Eh, moisture, or the concentration of organic matter) or a decrease in environmental capacity. The characteristics of DGH are discussed. The process of a typical DGH can be expressed as a nonlinear polynomial. The points where the derivative functions of the first and second orders of the polynomial reach zero, minimum and maximum are keys for risk assessment and hazard prediction. The process and mechanism of the hazard are principally due to the transformation of pollutants among different species. The concepts of "total releasable content of pollutant" (TRCP) and "total concentration of active species" (TCAS) are defined to describe the mechanism of DGH. The possibility of temporal and spatial propagation is discussed. A case study shows that there exists a transformation mechanism of "gradual release" and "chain reaction" among the exchangeable species and those bound to carbonates, iron and manganese oxides, and organic matter, thus causing the delayed geochemical hazard.
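The key points mentioned above, zeros of the first and second derivatives of the release polynomial, can be located numerically. The cubic below is an illustrative release curve, not data from the case study:

```python
def deriv(coeffs):
    """Derivative of a polynomial given as coefficients [c0, c1, c2, ...]."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def polyval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def roots_in(coeffs, lo, hi, steps=10000):
    """Real roots in [lo, hi] by sign-change scan plus bisection."""
    roots, h = [], (hi - lo) / steps
    for k in range(steps):
        a, b = lo + k * h, lo + (k + 1) * h
        fa, fb = polyval(coeffs, a), polyval(coeffs, b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            for _ in range(60):
                m = 0.5 * (a + b)
                if polyval(coeffs, a) * polyval(coeffs, m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# Illustrative release curve r(t) = 9t - 6t^2 + t^3 (not case-study data):
r = [0.0, 9.0, -6.0, 1.0]
turning_points = roots_in(deriv(r), 0.0, 5.0)            # where dr/dt = 0
inflection_points = roots_in(deriv(deriv(r)), 0.0, 5.0)  # where d2r/dt2 = 0
```

The turning points mark where the release rate changes sign and the inflection point where it accelerates fastest, which is the information the abstract identifies as key for risk assessment.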
A Quasi-Poisson Approach on Modeling Accident Hazard Index for Urban Road Segments
Directory of Open Access Journals (Sweden)
Lu Ma
2014-01-01
Full Text Available In light of recently emphasized studies on the risk evaluation of crashes, accident counts under specific transportation facilities are adopted to reflect the chance of crash occurrence. The current study introduces a more comprehensive measure that supplements accident risk with information on accident harmfulness, named the Accident Hazard Index (AHI) in the following context. Before the statistical analysis, datasets from various sources are integrated on a GIS platform, and the corresponding procedures are presented as an illustrative example for similar analyses. Then, a quasi-Poisson regression model is suggested for the analysis, and the results show that the model is appropriate for dealing with overdispersed count data; several key explanatory variables were found to have a significant impact on the estimation of the AHI. In addition, the effect of weights on different severity levels of accidents is examined, and the selection of the weight is also discussed.
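A quasi-Poisson fit keeps the Poisson point estimates and scales inference by a dispersion factor phi estimated from the Pearson residuals. The sketch below, on synthetic overdispersed counts (the data and covariate are illustrative, not the study's), fits the log-linear model by IRLS and estimates phi:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: accident counts per segment vs. one scaled covariate.
n = 500
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
mu_true = np.exp(0.5 + 1.2 * x)
# Negative-binomial draws give overdispersed, "quasi-Poisson-like" counts.
y = rng.negative_binomial(5, 5 / (5 + mu_true))

# Fit a Poisson log-linear model by IRLS; quasi-Poisson keeps these same
# coefficient estimates and inflates standard errors by sqrt(phi).
beta = np.zeros(2)
for _ in range(50):
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu            # working response
    w = mu                             # working weights
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

mu = np.exp(X @ beta)
phi = np.sum((y - mu) ** 2 / mu) / (n - 2)   # Pearson dispersion estimate
print(beta, phi)   # phi substantially above 1 signals overdispersion
```

With equidispersed Poisson data phi would sit near 1; values well above 1, as here, justify the quasi-Poisson variance adjustment.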
Opinion: the use of natural hazard modeling for decision making under uncertainty
Directory of Open Access Journals (Sweden)
David E Calkin
2015-04-01
Full Text Available Decision making to mitigate the effects of natural hazards is a complex undertaking fraught with uncertainty. Models to describe risks associated with natural hazards have proliferated in recent years. Concurrently, there is a growing body of work focused on developing best practices for natural hazard modeling and on creating structured evaluation criteria for complex environmental models. However, to our knowledge there has been less focus on the conditions under which decision makers can confidently rely on results from these models. In this review we propose a preliminary set of conditions necessary for the appropriate application of modeled results to natural hazard decision making, and provide relevant examples within US wildfire management programs.
Debris flow hazard modelling on medium scale: Valtellina di Tirano, Italy
Directory of Open Access Journals (Sweden)
J. Blahut
2010-11-01
Full Text Available Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and a lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology, and debris flow hazard initiation maps, as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
hanging wall and directivity effects) within modern ground motion prediction equations, can have an influence on the seismic hazard at a site. Yet we also illustrate the conditions under which these effects may be partially tempered when considering the full uncertainty in rupture behaviour within the fault system. The third challenge is the development of efficient means for representing both aleatory and epistemic uncertainties from active fault models in PSHA. In implementing state-of-the-art seismic hazard models into OpenQuake, such as those recently undertaken in California and Japan, new modeling techniques are needed that redefine how we treat interdependence of ruptures within the model (such as mutual exclusivity), and the propagation of uncertainties emerging from geology. Finally, we illustrate how OpenQuake, and GEM's additional toolkits for model preparation, can be applied to address long-standing issues in active fault modeling in PSHA. These include constraining the seismogenic coupling of a fault and the partitioning of seismic moment between the active fault surfaces and the surrounding seismogenic crust. We illustrate some of the possible roles that geodesy can play in the process, but highlight where this may introduce new uncertainties and potential biases into the seismic hazard process, and how these can be addressed.
Investigation of the Effect of Traffic Parameters on Road Hazard Using Classification Tree Model
Directory of Open Access Journals (Sweden)
Md. Mahmud Hasan
2012-09-01
Full Text Available This paper presents a method for the identification of hazardous situations on freeways. For this study, an approximately 18 km long section of the Eastern Freeway in Melbourne, Australia was selected as a test bed. Three categories of data, i.e. traffic, weather, and accident records, were used for the analysis and modelling. A classification tree based model was developed to estimate crash risk probability. In formulating the model, it was found that weather conditions did not have a significant impact on accident occurrence, so the classification tree was built using only two traffic indices: traffic flow and vehicle speed. The formulated classification tree is able to identify possible hazard and non-hazard situations on the freeway. The outcome of the study will aid hazard mitigation strategies.
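A classification tree of the kind described grows by choosing, at each node, the threshold that minimises the weighted Gini impurity of the two children. A minimal single-split sketch on hypothetical speed/hazard observations (one feature shown; the real model also used traffic flow):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a binary label array."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels, minlength=2) / len(labels)
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    """Exhaustive search for the threshold minimising weighted Gini impurity."""
    order = np.argsort(feature)
    f, y = feature[order], labels[order]
    best = (None, gini(y))
    for i in range(1, len(f)):
        if f[i] == f[i - 1]:
            continue
        thr = (f[i] + f[i - 1]) / 2
        score = (i * gini(y[:i]) + (len(y) - i) * gini(y[i:])) / len(y)
        if score < best[1]:
            best = (thr, score)
    return best

# Hypothetical observations: vehicle speed (km/h) and hazard label (1 = hazardous).
speed = np.array([112, 108, 95, 60, 55, 40, 102, 98, 50, 45], dtype=float)
hazard = np.array([1, 1, 1, 0, 0, 0, 1, 1, 0, 0])

thr, score = best_split(speed, hazard)
print(thr, score)   # -> 77.5 0.0 (a clean split between 60 and 95 km/h)
```

A full tree applies this search recursively to each child node until a stopping rule (depth, node size, or impurity) is met.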
Fitting Additive Binomial Regression Models with the R Package blm
Directory of Open Access Journals (Sweden)
Stephanie Kovalchik
2013-09-01
Full Text Available The R package blm provides functions for fitting a family of additive regression models to binary data. The included models are the binomial linear model, in which all covariates have additive effects, and the linear-expit (lexpit) model, which allows some covariates to have additive effects and other covariates to have logistic effects. Additive binomial regression is a model of event probability, and the coefficients of linear terms estimate covariate-adjusted risk differences. Thus, in contrast to logistic regression, additive binomial regression focuses on absolute risk and risk differences. In this paper, we give an overview of the methodology we have developed to fit the binomial linear and lexpit models to binary outcomes from cohort and population-based case-control studies. We illustrate the blm package's methods for additive model estimation, diagnostics, and inference with risk association analyses of a bladder cancer nested case-control study in the NIH-AARP Diet and Health Study.
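blm itself is an R package; as a language-neutral illustration of why the identity link yields risk differences, the numpy sketch below (with made-up cohort counts) shows that the linear-model slope for a binary exposure equals the absolute risk difference:

```python
import numpy as np

# Illustrative cohort with one binary exposure; under the binomial *linear*
# model P(Y=1|X) = b0 + b1*X, the slope b1 is the covariate-adjusted risk
# difference. With a single binary covariate, least squares recovers it exactly.
exposed = np.array([1] * 200 + [0] * 300)
outcome = np.array([1] * 60 + [0] * 140     # exposed: risk 60/200 = 0.30
                 + [1] * 45 + [0] * 255)    # unexposed: risk 45/300 = 0.15

X = np.column_stack([np.ones_like(exposed), exposed])
b0, b1 = np.linalg.lstsq(X, outcome, rcond=None)[0]

risk_diff = outcome[exposed == 1].mean() - outcome[exposed == 0].mean()
print(b1, risk_diff)   # both ~0.15: identity-link slope = absolute risk difference
```

This is the interpretive advantage the abstract emphasises: the coefficient is directly an excess risk per unit of exposure, not a log-odds ratio.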
Accelerated proportional degradation hazards-odds model in accelerated degradation test
Institute of Scientific and Technical Information of China (English)
Tingting Huang; Zhizhong Li
2015-01-01
An accelerated proportional degradation hazards-odds model is proposed. It is a non-parametric model and thus has path-free and distribution-free properties, avoiding the errors caused by faulty assumptions about degradation paths or the distribution of degradation measurements. It is established on a link function which combines the degradation cumulative hazard rate function and the degradation odds function through a transformation parameter, which makes the accelerated proportional degradation hazards model and the accelerated proportional degradation odds model special cases of it. Hypothesis tests are discussed, and the proposed model is applicable when some model assumptions are satisfied. The model is used to estimate the reliability of miniature bulbs under low stress levels based on degradation data obtained under high stress levels, validating its effectiveness.
An Additive-Multiplicative Restricted Mean Residual Life Model
DEFF Research Database (Denmark)
Mansourvar, Zahra; Martinussen, Torben; Scheike, Thomas H.
2016-01-01
mean residual life model to study the association between the restricted mean residual life function and potential regression covariates in the presence of right censoring. This model extends the proportional mean residual life model using an additive model as its covariate dependent baseline. For the suggested model, some covariate effects are allowed to be time-varying. To estimate the model parameters, martingale estimating equations are developed, and the large sample properties of the resulting estimators are established. In addition, to assess the adequacy of the model, we investigate a goodness of fit test that is asymptotically justified. The proposed methodology is evaluated via simulation studies and further applied to a kidney cancer data set collected from a clinical trial.
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
Additive Intensity Regression Models in Corporate Default Analysis
DEFF Research Database (Denmark)
Lando, David; Medhat, Mamdouh; Nielsen, Mads Stenbo
2013-01-01
We consider additive intensity (Aalen) models as an alternative to the multiplicative intensity (Cox) models for analyzing the default risk of a sample of rated, nonfinancial U.S. firms. The setting allows for estimating and testing the significance of time-varying effects. We use a variety of mo...
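The Aalen additive intensity model estimates cumulative regression coefficients by a least-squares step at each event time. A minimal numpy sketch on synthetic default-style data with constant coefficients and no censoring (both simplifying assumptions for illustration; real corporate-default analyses handle censoring and time-varying effects):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with an additive intensity h_i(t) = a + b * x_i, using constant
# a = 0.2 and b = 0.2 (x_i an illustrative firm-level indicator).
n = 400
x = rng.binomial(1, 0.5, n).astype(float)
T = rng.exponential(1.0 / (0.2 + 0.2 * x))   # event times

X = np.column_stack([np.ones(n), x])
order = np.argsort(T)

# Aalen's least-squares estimator of the cumulative coefficients B(t):
# at each event time, dB = argmin ||X_r b - dN||^2 over the at-risk set.
B = np.zeros(2)
t_stop = 0.0
for k, i in enumerate(order):
    at_risk = order[k:]
    if len(at_risk) < 100:                 # stop while the at-risk set is large
        break
    dN = (at_risk == i).astype(float)      # indicator of the subject failing now
    B += np.linalg.lstsq(X[at_risk], dN, rcond=None)[0]
    t_stop = T[i]

print(B / t_stop)   # crude constant-hazard estimates, near the true (0.2, 0.2)
```

In practice one plots B(t) itself; an approximately linear cumulative coefficient indicates a constant effect, while curvature reveals the time-varying effects the abstract mentions.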
Yan, Fang; Xu, Kaili
2017-01-01
Because a biomass gasification station includes various hazard factors, hazard assessment is both necessary and significant. In this article, the cloud model (CM) is employed to improve set pair analysis (SPA), and a novel hazard assessment method for a biomass gasification station is proposed based on cloud model-set pair analysis (CM-SPA). In this method, the cloud weight is proposed as the index weight. In contrast to the index weights of other methods, cloud weights are expressed by cloud descriptors; hence, the randomness and fuzziness of cloud weights make them effective in reflecting the linguistic variables of experts. Then, the cloud connection degree (CCD) is proposed to replace the connection degree (CD), and the calculation algorithm of the CCD is worked out. By utilizing the CCD, the hazard assessment results are expressed by normal clouds reflected by cloud descriptors; meanwhile, the hazard grade is confirmed by analyzing the cloud descriptors. After that, two biomass gasification stations undergo hazard assessment via CM-SPA and AHP-based SPA, respectively. The comparison of assessment results illustrates that CM-SPA is suitable and effective for the hazard assessment of a biomass gasification station and makes the assessment results more reasonable and scientific. PMID:28076440
Spatio-Temporal Risk Assessment Process Modeling for Urban Hazard Events in Sensor Web Environment
Directory of Open Access Journals (Sweden)
Wei Wang
2016-11-01
Full Text Available Immediate risk assessment and analysis are crucial in managing urban hazard events (UHEs. However, it is a challenge to develop an immediate risk assessment process (RAP that can integrate distributed sensors and data to determine the uncertain model parameters of facilities, environments, and populations. To solve this problem, this paper proposes a RAP modeling method within a unified spatio-temporal framework and forms a 10-tuple process information description structure based on a Meta-Object Facility (MOF. A RAP is designed as an abstract RAP chain that collects urban information resources and performs immediate risk assessments. In addition, we propose a prototype system known as Risk Assessment Process Management (RAPM to achieve the functions of RAP modeling, management, execution and visualization. An urban gas leakage event is simulated as an example in which individual risk and social risk are used to illustrate the applicability of the RAP modeling method based on the 10-tuple metadata framework. The experimental results show that the proposed RAP immediately assesses risk by the aggregation of urban sensors, data, and model resources. Moreover, an extension mechanism is introduced in the spatio-temporal RAP modeling method to assess risk and to provide decision-making support for different UHEs.
A Remote Sensing Based Approach For Modeling and Assessing Glacier Hazards
Huggel, C.; Kääb, A.; Salzmann, N.; Haeberli, W.; Paul, F.
Glacier-related hazards such as ice avalanches and glacier lake outbursts can pose a significant threat to population and installations in high mountain regions. They are well documented in the Swiss Alps, and the high data density there is used to build up systematic knowledge of glacier hazard locations and potentials. Experience from long research activities thereby forms an important basis for ongoing hazard monitoring and assessment. However, in the context of environmental changes in general, and the highly dynamic physical environment of glaciers in particular, historical experience may increasingly lose its significance with respect to the impact zones of hazardous processes. On the other hand, in large and remote high mountain ranges such as the Himalayas, exact information on the location and potential of glacier hazards is often missing. Therefore, it is crucial to develop hazard monitoring and assessment concepts that include area-wide applications. Remote sensing techniques offer a powerful tool to narrow current information gaps. The present contribution proposes an approach structured in (1) detection, (2) evaluation, and (3) modeling of glacier hazards. Remote sensing data are used as the main input to (1). Algorithms taking advantage of multispectral, high-resolution data are applied for detecting glaciers and glacier lakes. Digital terrain modeling, and classification and fusion of panchromatic and multispectral satellite imagery, are performed in (2) to evaluate the hazard potential of possible hazard sources detected in (1). The locations found in (1) and (2) are used as input to (3). The models developed in (3) simulate the processes of lake outbursts and ice avalanches based on hydrological flow modeling and empirical values for average trajectory slopes. A probability-related function allows the models to indicate areas with lower and higher risk of being affected by catastrophic events. Application of the models to recent ice avalanches and lake outbursts show
Traffic Incident Clearance Time and Arrival Time Prediction Based on Hazard Models
Directory of Open Access Journals (Sweden)
Yang beibei Ji
2014-01-01
Full Text Available Accurate prediction of incident duration is not only important information for a Traffic Incident Management System, but also an effective input for travel time prediction. In this paper, hazard-based prediction models are developed for both incident clearance time and arrival time. The data were obtained from the Queensland Department of Transport and Main Roads' STREAMS Incident Management System (SIMS) for one year ending in November 2010. The best-fitting distributions are determined for both clearance and arrival time for three types of incident: crash, stationary vehicle, and hazard. The results show that the Gamma, Log-logistic, and Weibull distributions are the best fit for crash, stationary vehicle, and hazard incidents, respectively. The main impact factors are given for crash clearance time and arrival time, and the quantitative influences for crash and hazard incidents are presented for both clearance and arrival. The model accuracy is analyzed at the end.
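Fitting a parametric duration distribution such as the Weibull, as done for the incident data, reduces to a one-dimensional fixed-point iteration for the shape parameter, after which the scale follows in closed form. A sketch on synthetic clearance-time-like durations (the numbers are illustrative, not SIMS data):

```python
import numpy as np

def fit_weibull(t, iters=200):
    """Maximum-likelihood Weibull fit: damped fixed-point iteration for the
    shape k, then the scale lambda in closed form."""
    t = np.asarray(t, dtype=float)
    logt = np.log(t)
    k = 1.0
    for _ in range(iters):
        tk = t ** k
        k_new = 1.0 / (np.sum(tk * logt) / np.sum(tk) - logt.mean())
        k = 0.5 * (k + k_new)          # damping for numerical stability
    lam = np.mean(t ** k) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(2)
durations = rng.weibull(1.8, 2000) * 30.0    # minutes; true shape 1.8, scale 30
k, lam = fit_weibull(durations)
print(k, lam)   # close to (1.8, 30)
```

Distribution choice is then typically settled by comparing log-likelihoods or AIC across candidates (Gamma, Log-logistic, Weibull), as the paper does per incident type.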
Anderson, E. R.; Griffin, R.; Irwin, D.
2013-12-01
Optimized pit filling techniques use both cut and fill operations to minimize modifications of the original DEM. Satellite image interpretation and field surveying provide the baseline upon which to test the accuracy of each model simulation. By outlining areas that could potentially be inundated by debris flows, these efforts can be used to more accurately identify the places and assets immediately exposed to landslide hazards. We contextualize the results of the previous and ongoing efforts into how they may be incorporated into decision support systems. We also discuss if and how these analyses would have provided additional knowledge in the past, and identify specific recommendations as to how they could contribute to a more robust decision support system in the future.
Beheshtirad, M.; Noormandipour, N.
2009-04-01
Identification of regions with potential for landslide occurrence is one of the basic measures in natural resources management, as it decreases the damage caused by these phenomena. For this purpose, different landslide hazard zonation models have been proposed based on environmental conditions and goals. In this research, the applicability of the Haeri-Samiee landslide hazard zonation model was investigated in the Moalemkalayeh watershed. Existing landslides were identified and their inventory map was prepared as ground evidence. A topographical map (1:50,000) was divided into a network of 514 cells as working units, and the landslide hazard zonation map was produced based on the H.S. model. We investigated the similarity of the potential hazard classes and scores of the model with the ground evidence (landslide inventory map) in the SPSS and Minitab environments. Our results showed a significant correlation at the 0.01 level between potential hazard classes and scores and the number of landslides, the area of landslides, as well as the product of the number and area of landslides in the H.S. model. Therefore, the H.S. model is a suitable model for the Moalemkalayeh watershed.
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active fault and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault locations and their geometric and kinematic parameters, together with estimates of slip rates. By default, in this model all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates, and final expected peak ground accelerations. We investigated both the source model and earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which of the input parameters influence the final hazard results, and to what degree. The results of this comparison show that the deformation model and
García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.
2009-04-01
location data. These results show a high concordance between the landslide inventory and the estimated high-susceptibility zone, with an adjustment of 95.1% for the ANN model and 89.4% for the LR model. In addition, we make a comparative analysis of both techniques using the Receiver Operating Characteristic (ROC) curve, a graphical plot of sensitivity vs. (1 − specificity) for a binary classifier as a function of its discrimination threshold, and calculate the Area Under the ROC curve (AUROC) value for each model. Finally, the previous models are used to develop a new probabilistic landslide hazard map for future events, obtained by combining the expected triggering factor (calculated earthquake ground motion) for a return period of 475 years with the susceptibility map.
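The AUROC used here for model comparison is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with hypothetical susceptibility scores:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive outranks a random negative
    (ties counted as one half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical susceptibility scores for landslide (1) and stable (0) cells.
scores = [0.95, 0.80, 0.75, 0.60, 0.55, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    0,    1,    0]
print(auroc(scores, labels))   # -> 0.75
```

An AUROC of 0.5 is chance-level discrimination and 1.0 is perfect ranking, which is why it serves as a threshold-free basis for comparing the ANN and LR models.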
Loughlin, Susan
2013-04-01
GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk, and a network that aims to coordinate and integrate the efforts of the international volcanology community. Major international initiatives and partners such as the Smithsonian Institution's Global Volcanism Program, the State University of New York at Buffalo's VHub, the Earth Observatory of Singapore's WOVOdat, and many others underpin GVM. Activities currently include: design and development of databases of volcano data, volcanic hazards, vulnerability, and exposure with internationally agreed metadata standards; establishment of methodologies for analysis of the data (e.g. hazard and exposure indices) to inform risk assessment; and development of complementary hazard models and relevant hazard and risk assessment tools. GVM acts through establishing task forces to produce explicit deliverables in finite periods of time. GVM has a task force to deliver a global assessment of volcanic risk for the UN ISDR, a task force for indices, and a task force for volcano deformation from satellite observations. GVM is organising a Volcano Best Practices workshop in 2013. A recent product of GVM is a global database of large-magnitude explosive eruptions, and work is ongoing to develop databases on debris avalanches, lava dome hazards, and ash hazard. GVM aims to develop the capability to anticipate future volcanism and its consequences.
Snakes as hazards: modelling risk by chasing chimpanzees.
McGrew, William C
2015-04-01
Snakes are presumed to be hazards to primates, including humans, by the snake detection hypothesis (Isbell in J Hum Evol 51:1-35, 2006; Isbell, The fruit, the tree, and the serpent. Why we see so well, 2009). Quantitative, systematic data to test this idea are lacking for the behavioural ecology of living great apes and human foragers. An alternative proxy is snakes encountered by primatologists seeking, tracking, and observing wild chimpanzees. We present 4 years of such data from Mt. Assirik, Senegal. We encountered 14 species of snakes a total of 142 times. Almost two-thirds of encounters were with venomous snakes. Encounters occurred most often in forest and least often in grassland, and more often in the dry season. The hypothesis seems to be supported, if frequency of encounter reflects selective risk of morbidity or mortality.
Teamwork tools and activities within the hazard component of the Global Earthquake Model
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
Using Set Model for Learning Addition of Integers
Directory of Open Access Journals (Sweden)
Umi Puji Lestari
2015-07-01
Full Text Available This study aims to investigate how the set model can help fourth-grade students' understanding of the addition of integers. The study was carried out with 23 students and a teacher of class IVC at SD Iba Palembang in January 2015. This study is design research that also promotes PMRI as the underlying design context and activity. Results showed that the use of set models, packaged in activities of recording financial transactions with two-color chips and a card game, can help students understand the concept of the zero pair, addition with same-colored chips, and the cancellation strategy.
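The zero-pair and cancellation ideas in the chip activity can be stated precisely: an integer is a bag of positive and negative chips, and each positive-negative pair cancels to zero. A small illustrative sketch (not part of the study's materials):

```python
def add_with_chips(a, b):
    """Set-model addition: represent each integer as a bag of positive and
    negative chips, merge the bags, then remove zero pairs."""
    pos = max(a, 0) + max(b, 0)       # positive chips in the merged bag
    neg = max(-a, 0) + max(-b, 0)     # negative chips in the merged bag
    pairs = min(pos, neg)             # each zero pair cancels out
    return (pos - pairs) - (neg - pairs), pairs

value, cancelled = add_with_chips(-5, 3)
print(value, cancelled)   # -> -2 3  (three zero pairs removed, two negative chips left)
```

Same-colored chips simply merge (pairs = 0), which mirrors the progression of activities reported in the study.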
Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes
Hehr, Adam; Dapino, Marcelo J.
2016-04-01
Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high-power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net-shape parts. The structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance and guide the development of improved quality monitoring and control strategies.
AN INSTRUCTURAL SYSTEM MODEL OF COASTAL MANAGEMENT TO THE WATER RELATED HAZARDS IN CHINA
Institute of Scientific and Technical Information of China (English)
(author not listed)
2001-01-01
Coastal lowlands have large areas of hazard impact and relatively low capacity to prevent water-related hazards, as indicated by widespread flood hazards and high percentages of land with high flood vulnerability. Increasing population pressure and the shift of resource exploitation from land to sea will force more and more coastal lowlands to be developed in the future, further enhancing the danger of water-related hazards. In this paper, the coastal lowlands of northern Jiangsu province, China, were selected as a case study. The Interpretive Structural Model (ISM) was employed to analyze the direct and indirect impacts among the elements within the system and, thereby, to identify the causal elements, intermediate linkages, their expressions, and relations.
Modelling tropical cyclone hazards under climate change scenario using geospatial techniques
Hoque, M. A.; Phinn, S.; Roelfsema, C.; Childs, I.
2016-11-01
Tropical cyclones are a common and devastating natural disaster in many coastal areas of the world. As the intensity and frequency of cyclones will increase under the most likely future climate change scenarios, appropriate approaches at local scales (1-5 km) are essential for producing sufficiently detailed hazard models. These models are used to develop mitigation plans and strategies for reducing the impacts of cyclones. This study developed and tested a hazard modelling approach for cyclone impacts in Sarankhola upazila, a 151 km2 local government area in coastal Bangladesh. The study integrated remote sensing, spatial analysis, and field data to model cyclone-generated hazards under a climate change scenario at local scales. A model integrating historical cyclone data and a Digital Elevation Model (DEM) was used to generate cyclone hazard maps for different cyclone return periods. Frequency analysis was carried out using historical cyclone data (1960-2015) to calculate the storm surge heights of the 5, 10, 20, 50, and 100 year cyclone return periods. A local sea level rise scenario of 0.34 m for the year 2050 was simulated with the 20 and 50 year return periods. Our results showed that the cyclone-affected area increased with the return period: around 63% of the study area was located in the moderate to very high hazard zones for the 50 year return period, and 70% for the 100 year return period. The climate change scenario increased the cyclone impact area by 6-10% for every return period. Our findings indicate this approach has the potential to model cyclone hazards for developing mitigation plans and strategies to reduce the future impacts of cyclones.
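The frequency analysis described (surge height versus return period, plus a sea level rise offset) is commonly done with an extreme-value fit to annual maxima. The paper does not state which distribution it used, so the Gumbel choice below, fitted by the method of moments to entirely hypothetical surge data, is an assumption for illustration:

```python
import numpy as np

def gumbel_surge(annual_maxima, return_period, sea_level_rise=0.0):
    """Design surge height (m) for a given return period from a Gumbel fit by
    the method of moments, optionally adding a sea level rise offset."""
    x = np.asarray(annual_maxima, float)
    beta = x.std(ddof=1) * np.sqrt(6) / np.pi      # Gumbel scale
    mu = x.mean() - 0.5772 * beta                  # Gumbel location
    p = 1.0 - 1.0 / return_period                  # annual non-exceedance prob.
    return mu - beta * np.log(-np.log(p)) + sea_level_rise

# Hypothetical annual maximum surge heights (m), NOT the study's 1960-2015 record.
maxima = [1.9, 2.3, 2.1, 3.4, 2.8, 2.0, 4.1, 2.6, 3.0, 2.2,
          2.5, 3.7, 2.4, 2.9, 3.1, 2.7, 2.1, 3.3, 2.6, 2.8]

for T in (5, 10, 20, 50, 100):
    print(T, round(gumbel_surge(maxima, T), 2))    # surge grows with return period

# With the 0.34 m sea level rise scenario for 2050:
print(round(gumbel_surge(maxima, 50, sea_level_rise=0.34), 2))
```

Each design height would then drive the inundation mapping on the DEM for its return period, with the climate scenario simply shifting the water level upward.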
Measures to assess the prognostic ability of the stratified Cox proportional hazards model
DEFF Research Database (Denmark)
(Tybjaerg-Hansen, A.) The Fibrinogen Studies Collaboration.The Copenhagen City Heart Study; Tybjærg-Hansen, Anne
2009-01-01
Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures w...
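One widely used (though, as the abstract notes, not universally accepted) summary of prognostic ability is Harrell's concordance index, which for a stratified model can be computed within each stratum and pooled. A minimal unstratified sketch with hypothetical survival data:

```python
import numpy as np

def concordance_index(time, event, risk_score):
    """Harrell's C-index: among usable pairs (the earlier time is an observed
    event), the fraction where the higher risk score failed first."""
    time, event, risk = map(np.asarray, (time, event, risk_score))
    concordant = permissible = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is usable when subject i is observed to fail before j
            if event[i] == 1 and time[i] < time[j]:
                permissible += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / permissible

# Hypothetical data: higher risk scores fail earlier, so ranking is perfect.
time  = [2, 4, 5, 7, 9, 12]
event = [1, 1, 0, 1, 1, 0]          # 0 = censored
risk  = [3.1, 2.6, 2.5, 1.9, 1.0, 0.4]
print(concordance_index(time, event, risk))   # -> 1.0
```

A value of 0.5 means the model ranks no better than chance; censored pairs whose ordering cannot be established are simply excluded, which is one source of the difficulties the abstract alludes to.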
DEM resolution effects on shallow landslide hazard and soil redistribution modelling
Claessens, L.F.G.; Heuvelink, G.B.M.; Schoorl, J.M.; Veldkamp, A.
2005-01-01
In this paper we analyse the effects of digital elevation model (DEM) resolution on the results of a model that simulates spatially explicit relative shallow landslide hazard and soil redistribution patterns and quantities. We analyse distributions of slope, specific catchment area and relative haza
Validation and evaluation of predictive models in hazard assessment and risk management
Beguería, S.
2007-01-01
The paper deals with the validation and evaluation of mathematical models in natural hazard analysis, with a special focus on establishing their predictive power. Although most of the tools and statistics available are common to general classification models, some peculiarities arise in the case of h
Single-Index Additive Vector Autoregressive Time Series Models
LI, YEHUA
2009-09-01
We study a new class of nonlinear autoregressive models for vector time series, where the current vector depends on single-indexes defined on the past lags and the effects of different lags have an additive form. A sufficient condition is provided for stationarity of such models. We also study estimation of the proposed model using P-splines, hypothesis testing, asymptotics, selection of the order of the autoregression and of the smoothing parameters and nonlinear forecasting. We perform simulation experiments to evaluate our model in various settings. We illustrate our methodology on a climate data set and show that our model provides more accurate yearly forecasts of the El Niño phenomenon, the unusual warming of water in the Pacific Ocean. © 2009 Board of the Foundation of the Scandinavian Journal of Statistics.
Validation of transport models using additive flux minimization technique
Energy Technology Data Exchange (ETDEWEB)
Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)
2013-10-15
A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
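The core idea of additive flux minimization, varying an additional effective diffusivity until the simulated profile matches the experimental one, can be caricatured with a one-dimensional steady diffusion problem. The profile model and all numbers below are illustrative assumptions, not FACETS::Core or DAKOTA output.

```python
def profile(D, S=1.0, npts=21):
    # Steady-state density for -D * n'' = S on [0, 1] with n(1) = 0, n'(0) = 0
    return [S * (1.0 - (i / (npts - 1)) ** 2) / (2.0 * D) for i in range(npts)]

def mismatch(D_add, D_model, n_exp):
    """Squared error between the simulated and 'experimental' profiles."""
    n_sim = profile(D_model + D_add)
    return sum((a - b) ** 2 for a, b in zip(n_sim, n_exp))

D_true, D_model = 0.5, 0.3          # "experiment" vs. an incomplete transport model
n_exp = profile(D_true)
# Scan the additional effective diffusivity and keep the best profile match
candidates = [i * 0.01 for i in range(101)]
D_add = min(candidates, key=lambda d: mismatch(d, D_model, n_exp))
print(round(D_add, 2))  # recovers the missing transport, 0.2
```

The recovered D_add quantifies how much transport the candidate model fails to explain, which is exactly how the technique flags where the paleoclassical model alone is insufficient (large D_add at the pedestal top, near zero at the bottom).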
Three multimedia models used at hazardous and radioactive waste sites
Energy Technology Data Exchange (ETDEWEB)
Moskowitz, P.D.; Pardi, R.; Fthenakis, V.M.; Holtzman, S.; Sun, L.C. [Brookhaven National Lab., Upton, NY (United States); Rambaugh, J.O.; Potter, S. [Geraghty and Miller, Inc., Plainview, NY (United States)
1996-02-01
Multimedia models are used commonly in the initial phases of the remediation process where technical interest is focused on determining the relative importance of various exposure pathways. This report provides an approach for evaluating and critically reviewing the capabilities of multimedia models. This study focused on three specific models: MEPAS Version 3.0, MMSOILS Version 2.2, and PRESTO-EPA-CPG Version 2.0. These models evaluate the transport and fate of contaminants from source to receptor through more than a single pathway. The presence of radioactive and mixed wastes at a site poses special problems. Hence, in this report, restrictions associated with the selection and application of multimedia models for sites contaminated with radioactive and mixed wastes are highlighted. This report begins with a brief introduction to the concept of multimedia modeling, followed by an overview of the three models. The remaining chapters present more technical discussions of the issues associated with each compartment and their direct application to the specific models. In these analyses, the following components are discussed: source term; air transport; ground water transport; overland flow, runoff, and surface water transport; food chain modeling; exposure assessment; dosimetry/risk assessment; uncertainty; default parameters. The report concludes with a description of evolving updates to the models; these descriptions were provided by the model developers.
Modeling the influence of limestone addition on cement hydration
Directory of Open Access Journals (Sweden)
Ashraf Ragab Mohamed
2015-03-01
Full Text Available This paper addresses the influence of using Portland limestone cement “PLC” on cement hydration by characterizing its microstructure development. The European Standard EN 197-1:2011 and the Egyptian specification ESS 4756-1/2009 permit cement to contain up to 20% ground limestone. Computational tools assist in better understanding the influence of limestone additions on cement hydration and microstructure development, facilitating the acceptance of these more economical and ecological materials. The μic model has been developed to enable the modeling of the microstructural evolution of cementitious materials. In this research, the μic model is used to simulate both the role of limestone as a fine filler, providing additional surfaces for the nucleation and growth of hydration products, and its relatively slow reaction with hydrating cement to form a monocarboaluminate (AFmc) phase, similar to the monosulfoaluminate (AFm) phase formed in ordinary Portland cement. The model results reveal that limestone cement has an accelerated hydration rate; previous experimental results and the computer model “cemhyd3d” are used to validate this model.
Modeling Exposure to Persistent Chemicals in Hazard and Risk Assessment
Energy Technology Data Exchange (ETDEWEB)
Cowan-Ellsberry, Christina E.; McLachlan, Michael S.; Arnot, Jon A.; MacLeod, Matthew; McKone, Thomas E.; Wania, Frank
2008-11-01
Fate and exposure modeling has not thus far been explicitly used in the risk profile documents prepared to evaluate the significant adverse effects of candidate chemicals for either the Stockholm Convention or the Convention on Long-Range Transboundary Air Pollution. However, we believe models have considerable potential to improve the risk profiles. Fate and exposure models are already used routinely in other similar regulatory applications to inform decisions, and they have been instrumental in building our current understanding of the fate of POP and PBT chemicals in the environment. The goal of this paper is to motivate the use of fate and exposure models in preparing risk profiles in the POP assessment procedure by providing strategies for incorporating and using models. The ways that fate and exposure models can be used to improve and inform the development of risk profiles include: (1) benchmarking the ratio of exposure and emissions of candidate chemicals to the same ratio for known POPs, thereby opening the possibility of combining this ratio with the relative emissions and relative toxicity to arrive at a measure of relative risk; (2) directly estimating the exposure of the environment, biota and humans to provide information to complement measurements, or where measurements are not available or are limited; (3) identifying the key processes and chemical and/or environmental parameters that determine the exposure, thereby allowing the effective prioritization of research or measurements to improve the risk profile; and (4) predicting future time trends, including how quickly exposure levels in remote areas would respond to reductions in emissions. Currently there is no standardized consensus model for use in the risk profile context. Therefore, to choose the appropriate model, the risk profile developer must evaluate how appropriate an existing model is for a specific setting and whether the assumptions and input data are relevant in the context of the application.
Genomic breeding value estimation using nonparametric additive regression models
Directory of Open Access Journals (Sweden)
Solberg Trygve
2009-01-01
Full Text Available Abstract Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped only) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. Determining a predictor-specific degree of smoothing increased the accuracy.
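The kernel-smoothing step at the heart of this approach can be illustrated with a stand-in discrete kernel. The genotype codes, phenotypes and the weight function below are invented for illustration; the weight lam ** |g - g0| is a simple proxy for the binomial kernel used in the paper, not its exact estimator.

```python
def kernel_smooth(genos, phenos, lam=0.3):
    """Discrete-kernel regression of phenotype on genotype codes 0/1/2.

    The weight lam**|g - g0| stands in for the paper's binomial kernel:
    observations with genotypes closer to g0 get more weight."""
    def predict(g0):
        w = [lam ** abs(g - g0) for g in genos]
        return sum(wi * y for wi, y in zip(w, phenos)) / sum(w)
    return predict

# Toy single-marker data: genotype code (allele count) and phenotype
genos  = [0, 0, 1, 1, 1, 2, 2]
phenos = [1.0, 1.2, 2.1, 1.9, 2.0, 3.2, 2.8]
f = kernel_smooth(genos, phenos)
print([round(f(g), 2) for g in (0, 1, 2)])
```

In the full additive model, one such smoother per marker (or per flanking-marker pair) is fitted simultaneously, and the smoothing parameter lam is tuned, e.g. by bootstrapping as in the study.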
Modeling and Testing Landslide Hazard Using Decision Tree
Directory of Open Access Journals (Sweden)
Mutasem Sh. Alkhasawneh
2014-01-01
Full Text Available This paper proposes a decision tree model for specifying the importance of 21 factors causing the landslides in a wide area of Penang Island, Malaysia. These factors are vegetation cover, distance from the fault line, slope angle, cross curvature, slope aspect, distance from road, geology, diagonal length, longitude curvature, rugosity, plan curvature, elevation, rain precipitation, soil texture, surface area, distance from drainage, roughness, land cover, general curvature, tangent curvature, and profile curvature. Decision tree models are used for prediction, classification, and factor importance and are usually represented by an easy-to-interpret tree-like structure. Four models were created using Chi-square Automatic Interaction Detector (CHAID), Exhaustive CHAID, Classification and Regression Tree (CRT), and Quick-Unbiased-Efficient Statistical Tree (QUEST). Twenty-one factors were extracted using digital elevation models (DEMs) and then used as input variables for the models. A data set of 137,570 samples was selected for each variable in the analysis, where 68,786 samples represent landslides and 68,786 samples represent no landslides. Ten-fold cross-validation was employed for testing the models. The highest accuracy was achieved using Exhaustive CHAID (82.0%) compared to CHAID (81.9%), CRT (75.6%), and QUEST (74.0%). Across the four models, five factors were identified as the most important: slope angle, distance from drainage, surface area, slope aspect, and cross curvature.
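The factor-importance ranking behind these tree models rests on impurity reduction at candidate splits. A minimal sketch with toy slope-angle data, Gini impurity, and a single-split stump (rather than the CHAID/CRT/QUEST algorithms themselves):

```python
def gini(labels):
    """Gini impurity of a binary label list."""
    p = sum(labels) / len(labels)
    return 1.0 - p * p - (1.0 - p) ** 2

def split_gain(values, labels, threshold):
    """Impurity reduction from splitting at value <= threshold."""
    left  = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    return gini(labels) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

# Toy data: slope angle (degrees) and landslide occurrence (1 = landslide)
slope = [5, 10, 12, 25, 30, 35, 40, 45]
label = [0,  0,  0,  1,  0,  1,  1,  1]
best = max(slope, key=lambda t: split_gain(slope, label, t))
print(best, round(split_gain(slope, label, best), 3))
```

A factor whose best split yields a large impurity reduction near the root of the tree (here, slope angle) ranks as important, which is how the paper's five key factors emerge across the four models.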
Modelling long term survival with non-proportional hazards
Perperoglou, Aristidis
2006-01-01
In this work I consider models for survival data when the assumption of proportionality does not hold. The thesis consists of an Introduction, five papers, a Discussion and an Appendix. The Introduction presents technical information about the Cox model and introduces the ideas behind the extensions
Modeling contractor and company employee behavior in high hazard operation
Lin, P.H.; Hanea, D.; Ale, B.J.M.
2013-01-01
The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data. Howe
Predictive models in hazard assessment of Great Lakes contaminants for fish
Passino, Dora R. May
1986-01-01
A hazard assessment scheme was developed and applied to predict potential harm to aquatic biota of nearly 500 organic compounds detected by gas chromatography/mass spectrometry (GC/MS) in Great Lakes fish. The frequency of occurrence and estimated concentrations of compounds found in lake trout (Salvelinus namaycush) and walleyes (Stizostedion vitreum vitreum) were compared with available manufacturing and discharge information. Bioconcentration potential of the compounds was estimated from available data or from calculations of quantitative structure-activity relationships (QSAR). Investigators at the National Fisheries Research Center-Great Lakes also measured the acute toxicity (48-h EC50's) of 35 representative compounds to Daphnia pulex and compared the results with acute toxicity values generated by QSAR. The QSAR-derived toxicities for several chemicals underestimated the actual acute toxicity by one or more orders of magnitude. A multiple regression of log EC50 on log water solubility and molecular volume proved to be a useful predictive model. Additional models providing insight into toxicity incorporate solvatochromic parameters that measure dipolarity/polarizability, hydrogen bond acceptor basicity, and hydrogen bond donor acidity of the solute (toxicant).
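The predictive model described above, a multiple regression of log EC50 on log water solubility and molecular volume, can be sketched with plain normal equations. The data rows below are synthetic, generated from an assumed linear relationship, and are not values from the study.

```python
def ols(X, y):
    """Least squares via normal equations and Gaussian elimination, no deps."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for c in range(k):                      # forward elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    return beta

# Hypothetical rows: [1, log water solubility, molecular volume] -> log EC50,
# generated exactly from 0.5 + 0.8 * logS - 0.01 * V (synthetic, for illustration)
X = [[1, -2.0, 120], [1, -3.1, 150], [1, -1.2, 100], [1, -4.0, 180], [1, -2.5, 130]]
y = [-2.3, -3.48, -1.46, -4.5, -2.8]
print([round(c, 3) for c in ols(X, y)])  # recovers [0.5, 0.8, -0.01]
```

With real toxicity data the fit is of course not exact, but the same coefficient estimates give the predictive model the abstract describes.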
Paprotny, Dominik; Morales Nápoles, Oswaldo
2016-04-01
Low-resolution hydrological models are often applied to calculate extreme river discharges and delimit flood zones at continental and global scales. Still, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. Firstly, a statistical model based on Bayesian Networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain and land use data. The model's predictions of average annual maxima or discharges with certain return periods perform comparably to physical rainfall-runoff models applied at continental scale. A database of discharge scenarios - return periods under present and future climate - was prepared for the majority of European rivers. Secondly, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling saved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with several local high-resolution studies. Indeed, the comparison shows that, overall, the methods presented here gave similar or better alignment with local studies than the previously released pan-European flood map.
Additive manufacturing for consumer-centric business models
DEFF Research Database (Denmark)
Bogers, Marcel; Hadar, Ronen; Bilberg, Arne
2016-01-01
Digital fabrication—including additive manufacturing (AM), rapid prototyping and 3D printing—has the potential to revolutionize the way in which products are produced and delivered to the customer. Therefore, it challenges companies to reinvent their business model—describing the logic of creating...... and capturing value. In this paper, we explore the implications that AM technologies have for manufacturing systems in the new business models that they enable. In particular, we consider how a consumer goods manufacturer can organize the operations of a more open business model when moving from a manufacturer......-centric to a consumer-centric value logic. A major shift includes a move from centralized to decentralized supply chains, where consumer goods manufacturers can implement a “hybrid” approach with a focus on localization and accessibility or develop a fully personalized model where the consumer effectively takes over...
Semiparametric Additive Transformation Model under Current Status Data
Cheng, Guang
2011-01-01
We consider the efficient estimation of the semiparametric additive transformation model with current status data. A wide range of survival models and econometric models can be incorporated into this general transformation framework. We apply the B-spline approach to simultaneously estimate the linear regression vector, the nondecreasing transformation function, and a set of nonparametric regression functions. We show that the parametric estimate is semiparametric efficient in the presence of multiple nonparametric nuisance functions. An explicit consistent B-spline estimate of the asymptotic variance is also provided. All nonparametric estimates are smooth, and shown to be uniformly consistent and have faster than cubic rate of convergence. Interestingly, we observe a convergence rate interference phenomenon, i.e., the convergence rates of B-spline estimators are all slowed down to equal the slowest one. The constrained optimization is not required in our implementation. Numerical results are used to illustra...
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, research was carried out to model AAM processes with a multiscale, multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition
Keith, A. M.; Weigel, A. M.; Rivas, J.
2014-12-01
Copahue is a stratovolcano located along the rim of the Caviahue Caldera near the Chile-Argentina border in the Andes Mountain Range. There are several small towns located in proximity of the volcano with the two largest being Banos Copahue and Caviahue. During its eruptive history, it has produced numerous lava flows, pyroclastic flows, ash deposits, and lahars. This isolated region has steep topography and little vegetation, rendering it poorly monitored. The need to model volcanic hazard risk has been reinforced by recent volcanic activity that intermittently released several ash plumes from December 2012 through May 2013. Exposure to volcanic ash is currently the main threat for the surrounding populations as the volcano becomes more active. The goal of this project was to study Copahue and determine areas that have the highest potential of being affected in the event of an eruption. Remote sensing techniques were used to examine and identify volcanic activity and areas vulnerable to experiencing volcanic hazards including volcanic ash, SO2 gas, lava flow, pyroclastic density currents and lahars. Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI), EO-1 Advanced Land Imager (ALI), Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), ISS ISERV Pathfinder, and Aura Ozone Monitoring Instrument (OMI) products were used to analyze volcanic hazards. These datasets were used to create a historic lava flow map of the Copahue volcano by identifying historic lava flows, tephra, and lahars both visually and spectrally. Additionally, a volcanic risk and hazard map for the surrounding area was created by modeling the possible extent of ash fallout, lahars, lava flow, and pyroclastic density currents (PDC) for future eruptions. These model results were then used to identify areas that should be prioritized for disaster relief and evacuation orders.
Testing exclusion restrictions and additive separability in sample selection models
DEFF Research Database (Denmark)
Huber, Martin; Mellace, Giovanni
2014-01-01
Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...
A NOVEL SOFT COMPUTING MODEL ON LANDSLIDE HAZARD ZONE MAPPING
Directory of Open Access Journals (Sweden)
Iqbal Quraishi
2012-11-01
Full Text Available The effect of landslides is very prominent in India as well as worldwide. In India, the North-East region and all the areas along the Himalayan range are prone to landslides. State-wise, Uttarakhand, Himachal Pradesh and the northern part of West Bengal are identified as landslide risk zones. Within West Bengal, the Darjeeling area is identified as our focus zone. There are several types of landslides depending upon various conditions; among the contributing factors, earthquakes are the most significant. Both the field and GIS data are very varied and large in volume, and creating a proper data warehouse involves both remote sensing and field studies. Our proposed soft computing model merges the field and remote sensing data to create an optimized landslide susceptibility map of the zone and also provides a broad risk assessment. It takes census and economic survey data as inputs to calculate and predict the probable number of damaged houses, roads and other amenities, including the effect on GDP. The model is highly customizable and tends to provide situation-specific results. A fuzzy logic based approach has been adopted to partially implement the model in terms of different parameter data sets to show the effectiveness of the proposed model.
Modeling geomagnetic induction hazards using a 3-D electrical conductivity model of Australia
Wang, Liejun; Lewis, Andrew M.; Ogawa, Yasuo; Jones, William V.; Costelloe, Marina T.
2016-12-01
The surface electric field induced by external geomagnetic source fields is modeled for a continental-scale 3-D electrical conductivity model of Australia at periods of a few minutes to a few hours. The amplitude and orientation of the induced electric field at periods of 360 s and 1800 s are presented and compared to those derived from a simplified ocean-continent (OC) electrical conductivity model. It is found that the induced electric field in the Australian region is distorted by the heterogeneous continental electrical conductivity structures and surrounding oceans. On the northern coastlines, the induced electric field is decreased relative to the simple OC model due to a reduced conductivity contrast between the seas and the enhanced conductivity structures inland. In central Australia, the induced electric field is less distorted with respect to the OC model as the location is remote from the oceans, but inland crustal high-conductivity anomalies are the major source of distortion of the induced electric field. In the west of the continent, the lower conductivity of the Western Australia Craton increases the conductivity contrast between the deeper oceans and land and significantly enhances the induced electric field. Generally, the induced electric field in southern Australia, south of latitude -20°, is higher compared to northern Australia. This paper provides a regional indicator of geomagnetic induction hazards across Australia.
Two-stage local M-estimation of additive models
Institute of Scientific and Technical Information of China (English)
JIANG JianCheng; LI JianTao
2008-01-01
This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.
Combining computational models for landslide hazard assessment of Guantánamo province, Cuba
Castellanos Abella, E.A.
2009-01-01
As part of the Cuban system for landslide disaster management, a methodology was developed for regional scale landslide hazard assessment, which is a combination of different models. The method was applied in Guantánamo province at 1:100 000 scale. The analysis started with an extensive aerial photo
Rasmussen, Andrew
2004-01-01
This study extends literature on recidivism after teen court to add system-level variables to demographic and sentence content as relevant covariates. Interviews with referral agents and survival analysis with proportional hazards regression supplement quantitative models that include demographic, sentencing, and case-processing variables in a…
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
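The distinction between global and local error indexes used in such a sensitivity analysis can be sketched as follows. The surface radii and the specific index definitions are illustrative assumptions, not the paper's actual data or metrics.

```python
# Reference anatomy vs. printed AM model, as radii (mm) sampled on the surface
reference = [10.0, 12.5, 9.8, 11.2, 10.6]
am_model  = [10.3, 12.9, 10.1, 11.6, 10.8]   # printed part overestimates throughout

def global_error(ref, mod):
    """Overall accuracy: mean absolute surface deviation (a global index)."""
    return sum(abs(m - r) for r, m in zip(ref, mod)) / len(ref)

def local_errors(ref, mod):
    """Signed per-point deviation (a local index); positive = overestimation."""
    return [m - r for r, m in zip(ref, mod)]

print(round(global_error(reference, am_model), 2))            # 0.32
print(all(e > 0 for e in local_errors(reference, am_model)))  # True
```

The global index summarizes total build error; the signed local deviations show where it concentrates (in the study, at high-curvature regions) and that the standard process tends to overestimate the model.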
Additive Manufacturing of Medical Models--Applications in Rhinology.
Raos, Pero; Klapan, Ivica; Galeta, Tomislav
2015-09-01
In the paper we introduce guidelines and suggestions for the use of 3D image processing software in head pathology diagnostics, and procedures for obtaining a physical medical model by additive manufacturing/rapid prototyping techniques, bearing in mind the improvement of surgical performance, its maximum security and faster postoperative recovery of patients. This approach has been verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns using a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
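The estimation strategy the abstract describes, spline smoothing for the nonparametric component fitted jointly with the linear parameters, can be sketched in a few lines. This is a minimal least-squares illustration on simulated data, not the authors' quasi-likelihood procedure or their variable-selection penalty; the truncated-power basis, the knot placement, and the data-generating model are all assumptions made for the example.

```python
import numpy as np

def spline_basis(z, knots, degree=3):
    """Truncated-power polynomial spline basis for the nonparametric component."""
    cols = [z ** d for d in range(1, degree + 1)]
    cols += [np.clip(z - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)          # covariate entering the linear (parametric) part
z = rng.uniform(0, 1, size=n)   # covariate entering nonparametrically
y = 2.0 * x + np.sin(2 * np.pi * z) + 0.1 * rng.normal(size=n)

# Design matrix: intercept + linear part + spline part, fitted jointly by
# ordinary least squares (one linear system, no backfitting iterations).
B = spline_basis(z, knots=np.linspace(0.1, 0.9, 9))
D = np.column_stack([np.ones(n), x, B])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
beta_hat = coef[1]  # estimate of the linear coefficient (true value 2.0 here)
```

The point of the sketch is the abstract's computational claim: because the spline basis is finite-dimensional, the whole fit reduces to one least-squares solve rather than the large equation systems of kernel-based procedures.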
Malliavin's calculus in insider models: Additional utility and free lunches
2002-01-01
We consider simple models of financial markets with regular traders and insiders possessing some extra information hidden in a random variable which is accessible to the regular trader only at the end of the trading interval. The problems we focus on are the calculation of the additional utility of the insider and a study of his free lunch possibilities. The information drift, i.e. the drift to eliminate in order to preserve the martingale property in the insider's filtration, turns out to be...
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 years and 1,500 years. Firstly, we will discuss the rationale behind the appropriate hydraulic modelling methods and inputs chosen to produce a consistent global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why using innovative techniques customised for broad-scale use is preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30m resolution. Finally, we will suggest some
Building a risk-targeted regional seismic hazard model for South-East Asia
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way, with a focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and the impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the Indochina countries. The source model builds upon refined modelling approaches to characterize 1) seismic activity from geologic and geodetic data on crustal faults, 2) seismicity along the interface of subduction zones and within the slabs, and 3) earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone, the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities, due to existing uncertainties, in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.
Kinetics approach to modeling of polymer additive degradation in lubricants
Institute of Scientific and Technical Information of China (English)
Ilya I. KUDISH; Ruben G. AIRAPETYAN; Michael J. COVITCH
2001-01-01
A kinetics problem for a degrading polymer additive dissolved in a base stock is studied. The polymer degradation may be caused by the combination of such lubricant flow parameters as pressure, elongational strain rate, and temperature, as well as lubricant viscosity and the polymer characteristics (dissociation energy, bead radius, bond length, etc.). A fundamental approach to the problem of modeling mechanically induced polymer degradation is proposed. The polymer degradation is modeled on the basis of a kinetic equation for the density of the statistical distribution of polymer molecules as a function of their molecular weight. The integrodifferential kinetic equation for polymer degradation is solved numerically. The effects of pressure, elongational strain rate, temperature, and lubricant viscosity on the process of lubricant degradation are considered. The increase of pressure promotes fast degradation while the increase of temperature delays degradation. A comparison of a numerically calculated molecular weight distribution with an experimental one obtained in bench tests showed that they are in excellent agreement with each other.
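The flavour of such a kinetic model can be conveyed with a toy discretized chain-scission scheme. This is emphatically not the paper's integrodifferential equation: the three molecular-weight bins, the halving rule, and the scission rates are hypothetical, chosen only to show how a distribution over molecular weights evolves while total polymer mass is conserved.

```python
import numpy as np

# Toy scission kinetics (illustrative only): chains of molecular weight w break
# in half at a rate k(w) that grows with chain length, since longer chains are
# more stressed in elongational flow.
weights = np.array([1.0e5, 5.0e4, 2.5e4])   # discrete molecular-weight bins
counts = np.array([1.0, 0.0, 0.0])          # start with long chains only
k = np.array([0.5, 0.1, 0.0])               # hypothetical scission rates (1/s)

dt, steps = 0.01, 1000                      # explicit Euler over 10 s
for _ in range(steps):
    broken = k * counts * dt
    counts = counts - broken
    # Each broken chain yields two chains of half the weight (next bin down).
    counts[1:] = counts[1:] + 2.0 * np.roll(broken, 1)[1:]

# Number-average molecular weight drops as degradation proceeds.
mn = (counts * weights).sum() / counts.sum()
```

Because every scission replaces one chain by two half-weight chains, total mass `(counts * weights).sum()` stays constant while the number-average molecular weight decreases, the qualitative signature of shear-induced additive degradation.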
Multiscale Modeling of Powder Bed-Based Additive Manufacturing
Markl, Matthias; Körner, Carolin
2016-07-01
Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.
An example of debris-flows hazard modeling using GIS
Directory of Open Access Journals (Sweden)
L. Melelli
2004-01-01
Full Text Available We present a GIS-based model for predicting debris-flow occurrence. The availability of two different digital datasets and the use of a Digital Elevation Model (at a given scale) have greatly enhanced our ability to quantify and to analyse the topography in relation to debris flows. In particular, analysing the relationship between debris flows and the various causative factors provides new understanding of the mechanisms. We studied the contact zone between the calcareous basement and the fluvial-lacustrine infill in the adjacent northern area of the Terni basin (Umbria, Italy), and identified eleven basins and corresponding alluvial fans. We suggest that accumulations of colluvium in topographic hollows, whatever the sources might be, should be considered potential debris-flow source areas. In order to develop a susceptibility map for the entire area, an index was calculated from the number of initiation locations in each causative-factor unit divided by the areal extent of that unit within the study area. This index identifies the units that produce the most debris flows in each Representative Elementary Area (REA). Finally, the results are presented with the advantages and the disadvantages of the approach, and the need for further research.
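The susceptibility index described, initiation count per causative-factor unit divided by that unit's areal extent, is simple enough to sketch directly. The unit names, counts, and areas below are hypothetical, not values from the Terni study.

```python
# Hypothetical causative-factor units: (debris-flow initiation count, area in km^2).
units = {
    "colluvial hollow": (14, 3.5),
    "bare slope": (6, 12.0),
    "forested slope": (2, 20.0),
}

def susceptibility_index(counts_areas):
    """Initiation density per unit: count divided by areal extent."""
    return {name: n / area for name, (n, area) in counts_areas.items()}

idx = susceptibility_index(units)
ranked = sorted(idx, key=idx.get, reverse=True)
print(ranked)  # units producing the most debris flows per km^2 come first
```

Normalizing by area is the essential step: the bare slope has more raw initiations per unit than many small hollows would, but the hollows' small extent gives them the highest density and hence the highest susceptibility.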
Using the RBFN model and GIS technique to assess wind erosion hazards of Inner Mongolia, China
Shi, Huading; Liu, Jiyuan; Zhuang, Dafang; Hu, Yunfeng
2006-08-01
Soil wind erosion is the primary process and the main driving force behind land desertification and sand-dust storms in the arid and semi-arid areas of Northern China, and has received considerable research attention. This paper selects the Inner Mongolia autonomous region as the research area, quantifies the various indicators affecting soil wind erosion, uses GIS technology to extract the spatial data, and constructs an RBFN (Radial Basis Function Network) model for the assessment of wind erosion hazard. After training on sample data for the different levels of wind erosion hazard, we obtain the parameters of the model and then assess the wind erosion hazard. The results show that in the southern parts of Inner Mongolia the wind erosion hazard is very severe, in counties in the middle regions it varies from moderate to severe, and in the eastern regions it is slight. Comparison of the result with other research shows that it is in conformity with actual conditions, demonstrating the reasonability and applicability of the RBFN model.
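A radial basis function network of the kind used here has a particularly simple structure: Gaussian basis functions centred on training samples, with output weights fitted by linear least squares. The sketch below assumes Gaussian bases, two hypothetical erosion indicators, and made-up hazard scores; it is an illustration of the model class, not the paper's trained network.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial basis design matrix: phi_ij = exp(-||x_i - c_j||^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbfn(X, y, centers, width, ridge=1e-6):
    """Fit the output-layer weights by ridge-regularized least squares."""
    Phi = rbf_design(X, centers, width)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# Hypothetical samples of two erosion indicators (e.g. a wind-speed index and
# vegetation cover) with hazard scores 0 (slight) .. 3 (very severe);
# centers are taken to be the training points themselves.
X = np.array([[0.1, 0.9], [0.4, 0.6], [0.8, 0.3], [0.9, 0.1]])
y = np.array([0.0, 1.0, 2.0, 3.0])
w = train_rbfn(X, y, X, width=0.5)
pred = predict(X, X, 0.5, w)  # near-interpolates the training hazard scores
```

With one centre per sample and a small ridge term, the network essentially interpolates the training scores; in practice fewer centres and a held-out validation set would be used.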
Model Checking Vector Addition Systems with one zero-test
Bonet, Rémi; Leroux, Jérôme; Zeitoun, Marc
2012-01-01
We design a variation of the Karp-Miller algorithm to compute, in a forward manner, a finite representation of the cover (i.e., the downward closure of the reachability set) of a vector addition system with one zero-test. This algorithm yields decision procedures for several problems for these systems, open until now, such as place-boundedness or LTL model-checking. The proof techniques to handle the zero-test are based on two new notions of cover: the refined and the filtered cover. The refined cover is a hybrid between the reachability set and the classical cover. It inherits properties of the reachability set: equality of two refined covers is undecidable, even for usual Vector Addition Systems (with no zero-test), but the refined cover of a Vector Addition System is a recursive set. The second notion of cover, called the filtered cover, is the central tool of our algorithms. It inherits properties of the classical cover, and in particular, one can effectively compute a finite representation of this set, e...
WATEQ3 geochemical model: thermodynamic data for several additional solids
Energy Technology Data Exchange (ETDEWEB)
Krupka, K.M.; Jenne, E.A.
1982-09-01
Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ΔG°_f,298, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.
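The step from an accepted free energy to an equilibrium constant follows the standard relation ln K = -ΔG°_rxn / (RT). A minimal sketch, with a hypothetical reaction free energy rather than any value from the WATEQ3 compilation:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def log10_K(delta_G_rxn_J, T=298.15):
    """Equilibrium constant from the standard Gibbs free energy of reaction:
    ln K = -dG_rxn / (R T), converted here to log10 as used in speciation codes."""
    return -delta_G_rxn_J / (R * T * math.log(10))

# Hypothetical hydrolysis reaction with dG_rxn = -5.708 kJ/mol at 25 degC;
# a negative reaction free energy gives log K > 0 (products favored).
print(round(log10_K(-5708.0), 3))
```

The reaction free energy itself is assembled by free-energy summation: ΔG°_rxn = Σ ΔG°_f(products) - Σ ΔG°_f(reactants), which is why internal consistency of the tabulated formation energies matters so much for the resulting log K values.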
Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards
Energy Technology Data Exchange (ETDEWEB)
Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J
2007-11-26
This paper describes ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end user's computers, Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL, and state-of-the-science high-resolution urban models and event reconstruction capabilities.
A Mathematical Model for the Industrial Hazardous Waste Location-Routing Problem
Directory of Open Access Journals (Sweden)
Omid Boyer
2013-01-01
Full Text Available Technological progress has caused industrial hazardous waste to increase throughout the world. Management of hazardous waste is a significant issue due to the risk imposed on the environment and on human life. This risk can result both from the location of undesirable facilities and from the routing of hazardous waste. In this paper a biobjective mixed integer programming model for location-routing of industrial hazardous waste is developed. The first objective is total cost minimization, including transportation cost, operation cost, initial investment cost, and cost savings from selling recycled waste. The second objective is minimization of transportation risk, measured as the risk of population exposure within a bandwidth along the route. This model can help decision makers to locate treatment, recycling, and disposal centers simultaneously, and also to route waste between these facilities considering risk and cost criteria. The results of the solved problem demonstrate the conflict between the two objectives: it is possible to decrease the cost value by marginally increasing the transportation risk value, and vice versa. A weighted sum method is utilized to combine the two objective functions into a single objective function. To solve the problem, GAMS software with the CPLEX solver is used. The model is applied to Markazi province in Iran.
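The weighted sum scalarization mentioned in the abstract can be shown on a toy instance. The candidate plans, their costs and risks, and the normalization scales below are all hypothetical; the point is only how shifting the weight moves the preferred solution along the cost-risk trade-off.

```python
def weighted_sum(cost, risk, w_cost, cost_scale=1.0, risk_scale=1.0):
    """Scalarize the bi-objective (cost, risk) into one objective.
    Objectives are divided by (hypothetical) scale factors so the weight
    trades off comparable quantities."""
    return w_cost * cost / cost_scale + (1 - w_cost) * risk / risk_scale

# Hypothetical candidate routing plans: (total cost, population-exposure risk).
plans = {"A": (120.0, 0.8), "B": (100.0, 1.5), "C": (150.0, 0.4)}

def best_plan(w_cost):
    return min(plans, key=lambda k: weighted_sum(plans[k][0], plans[k][1],
                                                 w_cost,
                                                 cost_scale=150.0,
                                                 risk_scale=1.5))

print(best_plan(0.9), best_plan(0.1))  # cost-driven vs risk-driven choice
```

A cost-heavy weight picks the cheap, risky plan while a risk-heavy weight picks the safe, expensive one, which is exactly the conflict between objectives that the abstract reports.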
Critical load analysis in hazard assessment of metals using a Unit World Model.
Gandhi, Nilima; Bhavsar, Satyendra P; Diamond, Miriam L
2011-09-01
A Unit World approach has been used extensively to rank chemicals for their hazards and to understand differences in chemical behavior. Whereas the fate and effects of an organic chemical in a Unit World Model (UWM) analysis vary systematically according to one variable (fraction of organic carbon), and the chemicals have a singular ranking regardless of environmental characteristics, metals can change their hazard ranking according to freshwater chemistry, notably pH and dissolved organic carbon (DOC). Consequently, developing a UWM approach for metals requires selecting a series of representative freshwater chemistries, based on an understanding of the sensitivity of model results to this chemistry. Here we analyze results from a UWM for metals with the goal of informing the selection of appropriate freshwater chemistries for a UWM. The UWM loosely couples the biotic ligand model (BLM) to a geochemical speciation model (Windermere Humic Adsorption Model [WHAM]) and then to the multi-species fate transport-speciation (Transpec) model. The UWM is applied to estimate the critical load (CL) of cationic metals Cd, Cu, Ni, Pb, and Zn, using three lake chemistries that vary in trophic status, pH, and other parameters. The model results indicated a difference of four orders of magnitude in particle-to-total dissolved partitioning (K(d)) that translated into minimal differences in fate because of the short water residence time used. However, a maximum 300-fold difference was calculated in Cu toxicity among the three chemistries and three aquatic organisms. Critical loads were lowest (greatest hazard) in the oligotrophic water chemistry and highest (least hazard) in the eutrophic water chemistry, despite the highest fraction of free metal ion as a function of total metal occurring in the mesotrophic system, where toxicity was ameliorated by competing cations. Water hardness, DOC, and pH had the greatest influence on CL, because of the influence of these factors on aquatic
Komjathy, Attila; Yang, Yu-Ming; Meng, Xing; Verkhoglyadova, Olga; Mannucci, Anthony J.; Langley, Richard B.
2016-07-01
Natural hazards including earthquakes, volcanic eruptions, and tsunamis have been significant threats to humans throughout recorded history. Global navigation satellite systems (GNSS; including the Global Positioning System (GPS)) receivers have become primary sensors to measure signatures associated with natural hazards. These signatures typically include GPS-derived seismic deformation measurements, coseismic vertical displacements, and real-time GPS-derived ocean buoy positioning estimates. Another way to use GPS observables is to compute the ionospheric total electron content (TEC) to measure, model, and monitor postseismic ionospheric disturbances caused by, e.g., earthquakes, volcanic eruptions, and tsunamis. In this paper, we review research progress at the Jet Propulsion Laboratory and elsewhere using examples of ground-based and spaceborne observation of natural hazards that generated TEC perturbations. We present results for state-of-the-art imaging using ground-based and spaceborne ionospheric measurements and coupled atmosphere-ionosphere modeling of ionospheric TEC perturbations. We also report advancements and chart future directions in modeling and inversion techniques to estimate tsunami wave heights and ground surface displacements using TEC measurements and error estimates. Our initial retrievals strongly suggest that both ground-based and spaceborne GPS remote sensing techniques could play a critical role in detection and imaging of the upper atmosphere signatures of natural hazards including earthquakes and tsunamis. We found that combining ground-based and spaceborne measurements may be crucial in estimating critical geophysical parameters such as tsunami wave heights and ground surface displacements using TEC observations. The GNSS-based remote sensing of natural-hazard-induced ionospheric disturbances could be applied to and used in operational tsunami and earthquake early warning systems.
Techniques, advances, problems and issues in numerical modelling of landslide hazard
Van Asch, Theo; Van Beek, Ludovicus; Amitrano, David
2007-01-01
Slope movements (e.g. landslides) are dynamic systems that are complex in time and space and closely linked to both inherited and current preparatory and triggering controls. It is not yet possible to assess in all cases conditions for failure, reactivation and rapid surges and successfully simulate their transient and multi-dimensional behaviour and development, although considerable progress has been made in isolating many of the key variables and elementary mechanisms and to include them in physically-based models for landslide hazard assessments. Therefore, the objective of this paper is to review the state-of-the-art in the understanding of landslide processes and to identify some pressing challenges for the development of our modelling capabilities in the forthcoming years for hazard assessment. This paper focuses on the special nature of slope movements and the difficulties related to simulating their complex time-dependent behaviour in mathematical, physically-based models. It analyses successively th...
[Critique of the additive model of the randomized controlled trial].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
Directory of Open Access Journals (Sweden)
Till D Frank
Full Text Available We derive a statistical model of transcriptional activation using equilibrium thermodynamics of chemical reactions. We examine to what extent this statistical model predicts synergy effects of cooperative activation of gene expression. We determine parameter domains in which greater-than-additive and less-than-additive effects are predicted for cooperative regulation by two activators. We show that the statistical approach can be used to identify different causes of synergistic greater-than-additive effects: nonlinearities of the thermostatistical transcriptional machinery and three-body interactions between RNA polymerase and two activators. In particular, our model-based analysis suggests that at low transcription factor concentrations cooperative activation cannot yield synergistic greater-than-additive effects, i.e., DNA transcription can only exhibit less-than-additive effects. Accordingly, transcriptional activity turns from synergistic greater-than-additive responses at relatively high transcription factor concentrations into less-than-additive responses at relatively low concentrations. In addition, two types of re-entrant phenomena are predicted. First, our analysis predicts that under particular circumstances transcriptional activity will feature a sequence of less-than-additive, greater-than-additive, and eventually less-than-additive effects when for fixed activator concentrations the regulatory impact of activators on the binding of RNA polymerase to the promoter increases from weak, to moderate, to strong. Second, for appropriate promoter conditions when activator concentrations are increased then the aforementioned re-entrant sequence of less-than-additive, greater-than-additive, and less-than-additive effects is predicted as well. Finally, our model-based analysis suggests that even for weak activators that individually induce only negligible increases in promoter activity, promoter activity can exhibit greater-than-additive
Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US
Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.
2015-12-01
Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
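The ETAS conditional intensity underlying the rate simulations has a compact closed form: a background rate plus Omori-law triggering from every past event, scaled by a magnitude-dependent productivity. The sketch below uses a common parameterization with a toy two-event catalog and illustrative parameter values, not the values the authors fit for Oklahoma or the other zones.

```python
import numpy as np

def etas_rate(t, event_times, event_mags, mu, K, alpha, c, p, m0=3.0):
    """Conditional intensity of an ETAS model at time t (days):
    background rate mu plus modified-Omori aftershock triggering,
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p."""
    past = event_times < t
    dt = t - event_times[past]
    productivity = K * np.exp(alpha * (event_mags[past] - m0))
    return mu + np.sum(productivity * (dt + c) ** (-p))

# Hypothetical catalog: two M4 events at day 0 and day 10; illustrative
# parameters (mu in events/day, c in days).
times = np.array([0.0, 10.0])
mags = np.array([4.0, 4.0])
rate = etas_rate(11.0, times, mags, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1)
```

One day after the second event its Omori term still dominates, while the day-0 event has largely decayed; simulating catalogs from this intensity (and re-estimating mu per zone) is what allows the rate forecasts, with uncertainties, described in the abstract.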
Pal, Parimal; Das, Pallabi; Chakrabortty, Sankha; Thakura, Ritwik
2016-11-01
Dynamic modelling and simulation of a nanofiltration-forward osmosis integrated complete system was carried out, along with economic evaluation, to pave the way for scale-up of such a system for treating hazardous pharmaceutical wastes. Operated in a closed loop, the system not only protects surface water from the onslaught of hazardous industrial wastewater but also saves on the cost of fresh water by turning wastewater recyclable at an affordable price. The success of the dynamic model in capturing the relevant transport phenomena is well reflected in a high overall correlation coefficient (R² > 0.98) and low relative error, with the forward osmosis loop operating at a reasonably high flux of 56-58 l per square metre per hour.
Model and Method for Multiobjective Time-Dependent Hazardous Material Transportation
Directory of Open Access Journals (Sweden)
Zhen Zhou
2014-01-01
Full Text Available In most hazardous material transportation problems, risk factors are assumed to be constant, which ignores the fact that they can vary with time throughout the day. In this paper, we deal with a novel time-dependent hazardous material transportation problem via lane reservation, in which the dynamic nature of transportation risk in the real-life traffic environment is taken into account. We first develop a multiobjective mixed integer programming (MIP) model with two conflicting objectives: minimizing the impact on normal traffic resulting from lane reservation, and minimizing the total transportation risk. We then present a cut-and-solve based ε-constraint method to solve this model. Computational results indicate that our method outperforms the ε-constraint method based on the optimization software package CPLEX.
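The ε-constraint idea itself is easy to demonstrate: keep one objective as the thing to minimize and turn the other into a bound. The toy option set below (traffic-impact, risk pairs) is hypothetical, and the sketch is the plain ε-constraint scheme, not the authors' cut-and-solve acceleration.

```python
# Hypothetical lane-reservation plans: (impact on normal traffic, transport risk).
options = [(2, 9.0), (3, 7.5), (5, 6.0), (8, 4.0), (12, 3.5)]

def eps_constraint(eps_risk):
    """Minimize traffic impact subject to risk <= eps_risk; None if infeasible."""
    feasible = [o for o in options if o[1] <= eps_risk]
    return min(feasible, key=lambda o: o[0]) if feasible else None

# Sweeping the risk bound traces out (an approximation of) the Pareto front:
front = {eps: eps_constraint(eps) for eps in (4.0, 6.0, 8.0)}
print(front)
```

Tightening the risk bound forces progressively higher traffic impact, which is the two-objective conflict the MIP model formalizes; in the paper each `eps_constraint` evaluation is itself a MIP solve rather than a scan over an explicit list.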
Lambert, Cédric; Thoeni, Klaus; Giacomini, Anna; Casagrande, Davide; Sloan, Scott
2012-09-01
Developing an accurate representation of the rock mass fabric is a key element in rock fall hazard analysis. The orientation, persistence and density of fractures control the volume and shape of unstable blocks or compartments. In this study, the discrete fracture modelling technique and digital photogrammetry were used to accurately depict the fabric. A volume distribution of unstable blocks was derived combining polyhedral modelling and kinematic analyses. For each block size, probabilities of failure and probabilities of propagation were calculated. A complete energy distribution was obtained by considering, for each block size, its occurrence in the rock mass, its probability of falling, its probability to reach a given location, and the resulting distribution of energies at each location. This distribution was then used with an energy-frequency diagram to assess the hazard.
Westreich, Daniel; Cole, Stephen R; Schisterman, Enrique F; Platt, Robert W
2012-08-30
Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
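The stabilized weights compared in this simulation study have the form SW = P(A = a) / P(A = a | L): the marginal probability of the treatment actually received over its confounder-conditional probability. A minimal point-treatment sketch (single time point, one binary confounder, probabilities estimated nonparametrically by stratum) with made-up data:

```python
import numpy as np

# Hypothetical point-treatment data: binary confounder L, binary treatment A.
L = np.array([0, 0, 0, 0, 1, 1, 1, 1])
A = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# Treatment model P(A=1 | L), estimated nonparametrically within each stratum,
# and the marginal P(A=1) for the stabilizing numerator.
p_marg = A.mean()
p_cond = np.array([A[L == l].mean() for l in (0, 1)])[L]

# Stabilized inverse-probability-of-treatment weights:
# SW_i = P(A = a_i) / P(A = a_i | L_i)
num = np.where(A == 1, p_marg, 1 - p_marg)
den = np.where(A == 1, p_cond, 1 - p_cond)
sw = num / den
print(round(sw.mean(), 3))  # stabilized weights average close to 1
```

Unstabilized weights replace the numerator by 1 and can become very large when a treatment is rare in some stratum, which is the mechanism behind the inflated standard errors the abstract reports; in the longitudinal setting the numerator and denominator each become products over time points.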
The Impact of the Subduction Modeling Beneath Calabria on Seismic Hazard
Morasca, P.; Johnson, W. J.; Del Giudice, T.; Poggi, P.; Traverso, C.; Parker, E. J.
2014-12-01
The aim of this work is to better understand the influence of subduction beneath Calabria on seismic hazard, as very little is known about present-day kinematics and the seismogenic potential of the slab interface in the Calabrian Arc region. This evaluation is significant because, depending on stress conditions, subduction zones can vary from being fully coupled to almost entirely decoupled, with important consequences for seismic hazard assessment. Although the debate is still open about the current kinematics of the plates and microplates lying in the region and the degree of coupling of the Ionian lithosphere beneath Calabria, GPS data suggest that this subduction is locked in its interface sector. The lack of instrumentally recorded thrust earthquakes also suggests this zone is locked. The current seismotectonic model developed for the Italian national territory is simplified in this area and does not reflect the possibility of locked subduction beneath Calabria that could produce infrequent but very large earthquakes associated with the subduction interface. Because of this we have conducted an independent seismic source analysis to take into account the influence of subduction as part of a regional seismic hazard analysis. Our final model includes two separate provinces for the subduction beneath Calabria: inslab and interface. From a geometrical point of view the interface province is modeled with a depth between 20-50 km and a dip of 20°, while the inslab one dips 70° between 50-100 km. Following recent interpretations we take into account that the interface subduction is possibly locked and that, in such a case, large events could occur as characteristic earthquakes. The results of the PSHA analysis show that the subduction beneath the Calabrian region has an influence on the total hazard for this region, especially for long return periods. Regional seismotectonic models for this region should account for subduction.
Progress in NTHMP Hazard Assessment
Gonzalez, F.I.; Titov, V.V.; Mofjeld, H.O.; Venturato, A.J.; Simmons, R.S.; Hansen, R.; Combellick, R.; Eisner, R.K.; Hoirup, D.F.; Yanagi, B.S.; Yong, S.; Darienzo, M.; Priest, G.R.; Crawford, G.L.; Walsh, T.J.
2005-01-01
The Hazard Assessment component of the U.S. National Tsunami Hazard Mitigation Program has completed 22 modeling efforts covering 113 coastal communities with an estimated population of 1.2 million residents who are at risk. Twenty-three evacuation maps have also been completed. Important improvements in organizational structure have been made with the addition of two state geotechnical agency representatives to Steering Group membership, and progress has been made on other improvements suggested by program reviewers. © Springer 2005.
A nonparametric dynamic additive regression model for longitudinal data
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas H.
2000-01-01
dynamic linear models, estimating equations, least squares, longitudinal data, nonparametric methods, partly conditional mean models, time-varying-coefficient models
[Clinical research XXII. From clinical judgment to Cox proportional hazards model].
Pérez-Rodríguez, Marcela; Rivas-Ruiz, Rodolfo; Palacios-Cruz, Lino; Talavera, Juan O
2014-01-01
Survival analyses are commonly used to determine the time to an event (for example, death). They can also be used for other clinical outcomes, provided these are dichotomous, for example time to healing. A simple survival analysis considers the relationship with only one variable. The Cox proportional hazards model, by contrast, is a multivariate extension of survival analysis, in which other potentially confounding covariates of the effect of the main maneuver studied, such as age, gender or disease stage, are taken into account. This analysis can include both quantitative and qualitative variables in the model. The measure of association used is the hazard ratio (HR), which is not the same as the relative risk (RR) or odds ratio (OR): the HR refers to the possibility that one of the groups develops the event earlier than the group it is compared with. The Cox proportional hazards model is the multivariate survival model most widely used in medicine when a phenomenon is studied in two dimensions: time and event.
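To make the hazard ratio concrete, the sketch below fits a Cox model with a single binary covariate by Newton's method on the partial likelihood. The data are simulated with a true HR of 2; all names and numbers are invented for illustration, and a real analysis would use an established package (e.g. R's survival or Python's lifelines) rather than this minimal solver.

```python
import math
import random

def cox_hr_binary(times, events, group):
    """Estimate the hazard ratio for one binary covariate by maximising
    the Cox partial likelihood with Newton's method (no tie handling)."""
    data = sorted(zip(times, events, group))  # ascending event times
    beta = 0.0
    for _ in range(50):
        score, info = 0.0, 0.0
        for t_i, d_i, x_i in data:
            if not d_i:                      # censored: no event term
                continue
            s0 = s1 = 0.0
            for t_j, _, x_j in data:
                if t_j >= t_i:               # subject j still at risk
                    w = math.exp(beta * x_j)
                    s0 += w
                    s1 += w * x_j
            mean = s1 / s0                   # risk-set mean of x
            score += x_i - mean
            info += mean * (1.0 - mean)      # risk-set variance (binary x)
        if info == 0.0:
            break
        step = score / info
        beta += step
        if abs(step) < 1e-8:
            break
    return math.exp(beta)                    # hazard ratio

random.seed(1)
n = 400
group = [i % 2 for i in range(n)]
# group 1 has twice the baseline hazard -> true HR = 2
times = [random.expovariate(1.0 if g == 0 else 2.0) for g in group]
events = [1] * n                             # no censoring in this toy example
hr = cox_hr_binary(times, events, group)
print(round(hr, 2))                          # should be near 2
```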
Rounce, D.; McKinney, D. C.
2015-12-01
The last half century has witnessed considerable glacier melt that has led to the formation of large glacial lakes. These lakes typically form behind terminal moraines comprising loose boulders, debris, and soil, which are susceptible to failure and can cause a glacial lake outburst flood (GLOF). The lakes also act as a heat sink that accelerates glacier melt, in many cases accompanied by rapid areal expansion. As these glacial lakes continue to grow, their hazard also increases due to the increase in potential flood volume and the lakes' proximity to triggering events such as avalanches and landslides. Despite the large threat these lakes may pose to downstream communities, there are few detailed studies that combine satellite imagery with hydraulic models to present a holistic understanding of the GLOF hazard. The aim of this work is to assess the GLOF hazard of glacial lakes in Nepal using a holistic approach based on a combination of satellite imagery and hydraulic models. Imja Lake will be the primary focus of the modeling efforts, but the methods will be developed in a manner that is transferable to other potentially dangerous glacial lakes in Nepal.
Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.
2015-12-01
Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation vary in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows in Harrat Rahat.
Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling
Directory of Open Access Journals (Sweden)
G. Delmonaco
2003-01-01
Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In the last decades landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large scale of investigation (>1:10 000). In this work we report the main results of applying a geotechnical model coupled with a hydrological model for debris-flow hazard analysis. The analysis was developed through the following steps: landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return periods, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return periods. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return periods of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to a 75-year return period rainfall event, corresponding to an estimated cumulated daily intensity of 280-330 mm. This value can be considered the hydrological triggering
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and pressing issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. We then formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
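The fuzzy chance-constrained machinery is beyond a short example, but the deterministic core of the routing problem, trading off risk, travel time and fuel on a network, can be sketched by scalarising the three objectives with weights and running Dijkstra's algorithm. The network, node names and edge numbers below are invented for illustration and are not from the paper.

```python
import heapq

def dijkstra_weighted(graph, source, target, weights):
    """Shortest path after collapsing (risk, time, fuel) into a single
    edge cost with the given weight vector."""
    def cost(edge):
        risk, time, fuel = edge
        return weights[0] * risk + weights[1] * time + weights[2] * fuel

    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, edge in graph.get(u, []):
            nd = d + cost(edge)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], target
    while node != source:          # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# edges: (neighbour, (risk, travel_time, fuel)) -- illustrative numbers
graph = {
    "depot": [("A", (5, 10, 8)), ("B", (1, 25, 12))],
    "A":     [("plant", (6, 10, 8))],
    "B":     [("plant", (1, 20, 10))],
}
# a risk-dominated weighting routes around the riskier corridor via B
path, _ = dijkstra_weighted(graph, "depot", "plant", (10.0, 1.0, 1.0))
print(path)  # ['depot', 'B', 'plant']
```

Re-running with a time-dominated weighting such as `(0.1, 10.0, 1.0)` selects the faster route via A, which is the trade-off the multi-objective formulation makes explicit.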
Sampson, Christopher; Smith, Andrew; Bates, Paul; Neal, Jeffrey; Trigg, Mark
2015-12-01
Global flood hazard models have recently become a reality thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now limit significantly our ability to estimate flood inundation and risk for the majority of the planet's surface. The difficulty of deriving an accurate 'bare-earth' terrain model due to the interaction of vegetation and urban structures with the satellite-based remote sensors means that global terrain data are often poorest in the areas where people, property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models are over a decade old and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high resolution and high vertical precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.
Yazdani, Azad; Nicknam, Ahmad; Dadras, Ehsan Yousefi; Eftekhari, Seyed Nasrollah
2017-01-01
Ground motions are affected by directivity effects at near-fault regions which result in low-frequency cycle pulses at the beginning of the velocity time history. The directivity features of near-fault ground motions can lead to significant increase in the risk of earthquake-induced damage on engineering structures. The ordinary probabilistic seismic hazard analysis (PSHA) does not take into account such effects; recent studies have thus proposed new frameworks to incorporate directivity effects in PSHA. The objective of this study is to develop the seismic hazard mapping of Tehran City according to near-fault PSHA procedure for different return periods. To this end, the directivity models required in the modified PSHA were developed based on a database of the simulated ground motions. The simulated database was used in this study because there are no recorded near-fault data in the region to derive purely empirically based pulse prediction models. The results show that the directivity effects can significantly affect the estimate of regional seismic hazard.
Percolation model with an additional source of disorder
Kundu, Sumanta; Manna, S. S.
2016-06-01
The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in air, obstruction by solid objects, and even humidity differences in the environment. How the varying transmission ranges of the individual active elements affect the global connectivity of the network is an important practical question. Here a model of percolation phenomena with an additional source of disorder is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random radius R. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at its ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve: a point within one region represents an occupied bond; otherwise the bond is vacant. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values: one is pc(sq), the percolation threshold for ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
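A minimal Monte Carlo sketch of the model described above: random site occupation with probability p, a random disk radius per occupied site, a bond rule on the two radii, and top-to-bottom spanning detected with union-find. The lattice size, bond rule and probabilities are illustrative choices, not those of the paper.

```python
import random

def percolates(L, p, rule, rng):
    """One realisation of site percolation on an L x L square lattice where
    each occupied site carries a random disk radius; a bond between occupied
    neighbours is open iff rule(R1, R2) holds.  Returns True if an open
    cluster spans from the top row to the bottom row."""
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    rad = [[rng.random() for _ in range(L)] for _ in range(L)]
    parent = list(range(L * L + 2))      # union-find, plus two virtual nodes
    TOP, BOT = L * L, L * L + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            idx = i * L + j
            if i == 0:
                union(idx, TOP)
            if i == L - 1:
                union(idx, BOT)
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < L and nj < L and occ[ni][nj] \
                        and rule(rad[i][j], rad[ni][nj]):
                    union(idx, ni * L + nj)
    return find(TOP) == find(BOT)

rng = random.Random(7)
# illustrative rule: bond open iff the two disks touch on a unit lattice
touching = lambda r1, r2: r1 + r2 >= 1.0
frac = sum(percolates(40, 0.85, touching, rng) for _ in range(20)) / 20
print(round(frac, 2))   # estimated spanning probability at p = 0.85
```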
Assessing rainfall triggered landslide hazards through physically based models under uncertainty
Balin, D.; Metzger, R.; Fallot, J. M.; Reynard, E.
2009-04-01
Hazard and risk assessment require, besides good data, good simulation capabilities that allow prediction of events and their consequences. The present study introduces a landslide hazard assessment strategy based on coupling hydrological physically based models with slope stability models able to cope with uncertainty in input data and model parameters. The hydrological model used is based on the Water balance Simulation Model WASIM-ETH (Schulla et al., 1997), a fully distributed hydrological model that has previously been used successfully in alpine regions to simulate runoff, snowmelt, glacier melt and soil erosion, and the impact of climate change on these. The study region is the Vallon de Nant catchment (10 km²) in the Swiss Alps. A sensitivity analysis will be conducted in order to choose the discretization threshold, derived from a laser DEM, at which the hydrological model yields the best compromise between performance and computation time. The hydrological model will then be coupled with slope stability methods (using the topographic index and the soil moisture derived from the hydrological model) to simulate the spatial distribution of the initiation areas of different geomorphic processes such as debris flows and rainfall-triggered landslides. To calibrate the WASIM-ETH model, a Markov chain Monte Carlo Bayesian approach is adopted (Balin, 2004; Schaefli et al., 2006). The model is used in a single- and a multi-objective framework to simulate discharge and soil moisture with uncertainty at representative locations. This information is further used to assess potential initiation areas for rainfall-triggered landslides and to study the impact of uncertain input data, model parameters and simulated responses (discharge and soil moisture) on the modelling of geomorphological processes.
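The Metropolis step at the heart of an MCMC calibration like the one described can be sketched on a toy model. WASIM-ETH itself is far richer; here a single recession constant of an exponential discharge curve is calibrated against synthetic noisy observations, and all names, data and tuning choices are invented.

```python
import math
import random

def metropolis(loglik, init, step, n_iter, rng):
    """Plain Metropolis sampler with a Gaussian random-walk proposal."""
    chain, x, lx = [], init, loglik(init)
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp = loglik(prop)
        if math.log(rng.random()) < lp - lx:   # accept with prob min(1, e^(lp-lx))
            x, lx = prop, lp
        chain.append(x)
    return chain

# toy calibration target: recession constant k of Q(t) = Q0 * exp(-k t)
rng = random.Random(0)
k_true, q0, sigma = 0.3, 10.0, 0.2
obs = [q0 * math.exp(-k_true * t) + rng.gauss(0, sigma) for t in range(20)]

def loglik(k):
    """Gaussian log-likelihood (flat prior, k > 0), up to a constant."""
    if k <= 0:
        return -math.inf
    return -sum((o - q0 * math.exp(-k * t)) ** 2
                for t, o in enumerate(obs)) / (2 * sigma ** 2)

chain = metropolis(loglik, init=1.0, step=0.05, n_iter=4000, rng=rng)
posterior_mean = sum(chain[2000:]) / 2000      # discard burn-in
print(round(posterior_mean, 2))                # should be near 0.3
```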
Detailed Flood Modeling and Hazard Assessment from Storm Tides, Rainfall and Sea Level Rise
Orton, P. M.; Hall, T. M.; Georgas, N.; Conticello, F.; Cioffi, F.; Lall, U.; Vinogradov, S. V.; Blumberg, A. F.
2014-12-01
A flood hazard assessment has been conducted for the Hudson River from New York City to Troy at the head of tide, using a three-dimensional hydrodynamic model and merging hydrologic inputs and storm tides from tropical and extratropical cyclones, as well as spring freshet floods. Our recent work showed that neglecting freshwater flows leads to underestimation of peak water levels at up-river sites and that neglecting stratification (typical with two-dimensional modeling) leads to underestimation all along the Hudson. The hazard assessment framework utilizes a representative climatology of over 1000 synthetic tropical cyclones (TCs) derived from a statistical-stochastic TC model, and historical extratropical cyclones and freshets from 1950 to the present. Hydrodynamic modeling is applied with seasonal variations in mean sea level and in ocean and estuary stratification. The model is the Stevens ECOM model, which is separately used for operational ocean forecasts on the NYHOPS domain (http://stevens.edu/NYHOPS). For the synthetic TCs, an artificial neural network/Bayesian multivariate approach is used for rainfall-driven freshwater inputs to the Hudson, translating the TC attributes (e.g. track, SST, wind speed) directly into tributary stream flows (see the separate presentation by Cioffi for details). Rainfall intensity has been rising in recent decades in this region, and we will also examine the sensitivity of Hudson flooding to future climate-warming-driven increases in storm precipitation. The hazard assessment is being repeated for several values of sea level, as projected for future decades by the New York City Panel on Climate Change. Recent studies have given widely varying estimates of the present-day 100-year flood at New York City, from 2.0 m to 3.5 m, and special emphasis will be placed on quantifying our study's uncertainty.
Hyperbolic value addition and general models of animal choice.
Mazur, J E
2001-01-01
Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
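The hyperbolic discounting equation V = A/(1 + kD) from which the hyperbolic value-added model derives predicts preference reversal between a smaller-sooner and a larger-later reward as both delays grow, something exponential discounting cannot produce. A toy sketch, with hypothetical amounts, delays and discount rate k:

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Mazur's hyperbolic discounting equation: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

# smaller-sooner (A=2, D=1) vs larger-later (A=5, D=6):
# at short delays the smaller-sooner reward is preferred...
near = (hyperbolic_value(2, 1), hyperbolic_value(5, 6))
# ...but adding a common 10-unit delay to both reverses the preference
far = (hyperbolic_value(2, 11), hyperbolic_value(5, 16))
print(near[0] > near[1], far[0] < far[1])  # True True
```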
Directory of Open Access Journals (Sweden)
Vahdettin Demir
2016-01-01
Full Text Available In this study, flood hazard maps were prepared for the Mert River Basin, Samsun, Turkey, by using GIS and the Hydrologic Engineering Center's River Analysis System (HEC-RAS). In this river basin, human lives were lost and a significant amount of property damage was experienced in the 2012 flood. The preparation of the flood risk maps employed in the study includes the following steps: (1) digitization of topographical data and preparation of a digital elevation model using ArcGIS, (2) simulation of flood flows of different return periods using a hydraulic model (HEC-RAS), and (3) preparation of flood risk maps by integrating the results of (1) and (2).
Additive interaction in survival analysis
DEFF Research Database (Denmark)
Rod, Naja Hulvej; Lange, Theis; Andersen, Ingelise
2012-01-01
... is the relevant measure of interest. Multiplicative survival models, such as the Cox proportional hazards model, are often used to estimate the association between exposure and risk of disease in prospective studies. In Cox models, deviations from additivity have usually been assessed by surrogate measures of additive interaction derived from multiplicative models, an approach that is both counter-intuitive and sometimes invalid. This paper presents a straightforward and intuitive way of assessing deviation from additivity of effects in survival analysis by use of the additive hazards model. The model directly estimates the absolute size of the deviation from additivity and provides confidence intervals. In addition, the model can accommodate both continuous and categorical exposures and models both exposures and potential confounders on the same underlying scale. To illustrate the approach, we present ...
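On the additive hazards scale, the deviation from additivity of two exposures reduces to a simple contrast of the four stratum-specific hazards, h11 - h10 - h01 + h00. A toy calculation with invented incidence rates (a real additive hazards analysis would also supply confidence intervals, which this sketch omits):

```python
def hazard(events, person_years):
    """Crude incidence-rate (hazard) estimate: events per person-year."""
    return events / person_years

# illustrative stratified cohort data (hypothetical numbers)
h00 = hazard(10, 1000.0)   # neither exposure
h10 = hazard(30, 1000.0)   # exposure A only
h01 = hazard(25, 1000.0)   # exposure B only
h11 = hazard(60, 1000.0)   # both exposures

# deviation from additivity of effects on the hazard scale;
# zero under a purely additive hazards model
interaction = h11 - h10 - h01 + h00
print(round(interaction * 1000, 1))  # 15.0 excess cases per 1000 person-years
```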
Directory of Open Access Journals (Sweden)
N. N. Belyayev
2014-05-01
Full Text Available Purpose. Chemically hazardous facilities, where toxic substances are used, manufactured and stored, and main transport routes along which hazardous materials are carried, are potential sources of accidental atmospheric pollution. The purpose of this work is to develop a CFD model for evaluating the efficiency of local protection of a building against the ingress of hazardous substances by using an air curtain, accounting for sorption/desorption of the hazardous substance on indoor surfaces. Methodology. To solve the problem of the hydrodynamic interaction of the air curtain with the wind flow, taking the influence of the building into account, a model of ideal fluid is used. The transfer of the hazardous substance in the atmosphere is computed with a convection-diffusion transport equation. Indoor air pollution under infiltration of contaminated outside air is calculated with the Karisson & Huber model, which accounts for sorption of the hazardous substance on various indoor surfaces. The model equations are integrated numerically with finite-difference methods. Findings. An efficient CFD model has been constructed for evaluating the effectiveness of protecting buildings against the ingress of hazardous substances by means of an air curtain. On the basis of this model, a computational experiment was carried out to assess the effectiveness of this protection method while varying the location of the air curtain relative to the building. Originality. A new model was developed to compute the effectiveness of an air curtain in reducing the toxic chemical concentration inside a building. Practical value. The developed model can be used for the design of local protection of buildings against the ingress of hazardous substances.
Earthquake catalogs for the 2017 Central and Eastern U.S. short-term seismic hazard model
Mueller, Charles S.
2017-01-01
The U.S. Geological Survey (USGS) makes long-term seismic hazard forecasts that are used in building codes. The hazard models usually consider only natural seismicity; non-tectonic (man-made) earthquakes are excluded because they are transitory or too small. In the past decade, however, thousands of earthquakes related to underground fluid injection have occurred in the central and eastern U.S. (CEUS), and some have caused damage. In response, the USGS is now also making short-term forecasts that account for the hazard from these induced earthquakes. Seismicity statistics are analyzed to develop recurrence models, accounting for catalog completeness. In the USGS hazard modeling methodology, earthquakes are counted on a map grid, recurrence models are applied to estimate the rates of future earthquakes in each grid cell, and these rates are combined with maximum-magnitude models and ground-motion models to compute the hazard. The USGS published a forecast for the years 2016 and 2017. Here, we document the development of the seismicity catalogs for the 2017 CEUS short-term hazard model. A uniform earthquake catalog is assembled by combining and winnowing pre-existing source catalogs. The initial, final, and supporting earthquake catalogs are made available here.
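A common building block of the recurrence modelling mentioned above is the Aki maximum-likelihood estimate of the Gutenberg-Richter b-value from a catalog truncated at the completeness magnitude. The sketch below applies it to a synthetic catalog; the threshold, sample size and use of unbinned magnitudes are illustrative assumptions, not the USGS processing chain.

```python
import math
import random

def aki_b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value for magnitudes at or above
    the completeness threshold m_min (continuous magnitudes, no binning):
    b = log10(e) / (mean(M) - m_min)."""
    above = [m for m in mags if m >= m_min]
    mean = sum(above) / len(above)
    return math.log10(math.e) / (mean - m_min)

# synthetic complete catalog: the G-R law with b = 1 gives exponentially
# distributed magnitude excesses above m_min with rate b * ln(10)
rng = random.Random(42)
b_true = 1.0
mags = [2.5 + rng.expovariate(b_true * math.log(10)) for _ in range(5000)]
print(round(aki_b_value(mags, 2.5), 2))   # should be near 1.0
```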
Measurement Error in Proportional Hazards Models for Survival Data with Long-term Survivors
Institute of Scientific and Technical Information of China (English)
Xiao-bing ZHAO; Xian ZHOU
2012-01-01
This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naïve estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating functions and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.
Patil, M P; Sonolikar, R L
2008-10-01
This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on an Euler-Lagrange approach in which the gas phase (continuous phase) is treated in an Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy and mixture fraction, together with other closure equations, have been solved using the general-purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for the solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.
Darnell, A. R.; Phillips, J. C.; Barclay, J.; Herd, R. A.; Lovett, A. A.; Cole, P. D.
2013-04-01
In this study, we present a geographical information system (GIS)-based approach to enable the estimation of lahar features important to rapid hazard assessment (including flow routes, velocities and travel times). Our method represents a simplified first stage in extending the utility of widely used existing GIS-based inundation models, such as LAHARZ, to provide estimates of flow speeds. LAHARZ is used to determine the spatial distribution of a lahar of constant volume, and for a given cell in a GIS grid, a single-direction flow routing technique incorporating the effect of surface roughness directs the flow according to steepest descent. The speed of flow passing through a cell is determined by coupling the flow depth, change in elevation and roughness using Manning's formula, and in areas where there is little elevation difference, flow is routed to the locally maximum increase in velocity. Application of this methodology to lahars on Montserrat, West Indies, yielded support for this GIS-based approach as a hazard assessment tool through tests on small-volume (5,000-125,000 m³) dilute lahars (consistent with application of Manning's law). Dominant flow paths were mapped, and for the first time in this study area, velocities (magnitudes and spatial distribution) and average travel times were estimated for a range of lahar volumes. Flow depth approximations were also made using (modified) LAHARZ, and these refined the input to Manning's formula. Flow depths were verified within an order of magnitude by field observations, and velocity predictions were broadly consistent with proxy measurements and published data. Forecasts from this coupled method can operate on short- to mid-term timescales for hazard management. The methodology has potential to provide a rapid preliminary hazard assessment in similar systems where data acquisition may be difficult.
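The coupling of depth, slope and roughness through Manning's formula, v = (1/n) R^(2/3) S^(1/2), can be sketched directly. The roughness coefficient, depth, slope and cell size below are invented for illustration, not Montserrat values, and the wide-channel approximation R ≈ depth is an assumption of the sketch.

```python
def manning_velocity(n, depth, slope):
    """Manning's formula v = (1/n) * R^(2/3) * S^(1/2); for a wide,
    shallow flow the hydraulic radius R is approximated by flow depth."""
    return (1.0 / n) * depth ** (2.0 / 3.0) * slope ** 0.5

# illustrative lahar reach: rough channel, 2 m flow depth, 5% slope
v = manning_velocity(n=0.08, depth=2.0, slope=0.05)
travel_time_s = 1000.0 / v   # seconds to cross a hypothetical 1 km GIS cell
print(round(v, 1), round(travel_time_s))
```

Summing such per-cell travel times along a steepest-descent path is one way the spatial velocity field can be turned into the average travel times the abstract mentions.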
Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander
2012-01-01
Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
Additive gamma frailty models with applications to competing risks in related individuals
DEFF Research Database (Denmark)
Eriksson, Frank; Scheike, Thomas
2015-01-01
Epidemiological studies of related individuals are often complicated by the fact that follow-up on the event type of interest is incomplete due to the occurrence of other events. We suggest a class of frailty models with cause-specific hazards for correlated competing events in related individuals...
Web-based Services for Earth Observing and Model Data in National Applications and Hazards
Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.
2005-12-01
The ever-growing large volumes of Earth system science data, collected by Earth observing platforms and in situ stations and produced as model output, are increasingly being used by discipline scientists and by wider classes of users. In particular, applications of Earth system science data to environmental and hazard as well as other national applications require tailored or specialized data, as well as web-based tools and infrastructure. The latter are driven by applications and usage drivers which include ease of access, visualization of complex data, ease of producing value-added data, GIS and open source analysis usage, metadata, etc. Here we present different aspects of such web-based services and access, and discuss several applications in the hazards and environmental areas, including earthquake signatures and observations and model runs of hurricanes. Examples and lessons learned from the Mid-Atlantic Geospatial Information Consortium will be presented. We discuss a NASA-funded, open source on-line data analysis system that is being applied to climate studies for the ESIP Federation. Since enhanced, this project and the next-generation Metadata Integrated Data Analysis System allow users not only to identify data but also to generate new data products on-the-fly. The functionalities extend from limited predefined functions to sophisticated functions described by general-purpose GrADS (Grid Analysis and Display System) commands. The Federation system also allows third-party data products to be combined with local data. Software components are available for converting the output from MIDAS (OPeNDAP) into OGC-compatible software. The on-going Grid efforts at CEOSR and LAITS in the School of Computational Sciences (SCS) include enhancing the functions of Globus to provide support for a geospatial system so the system can share computing power to handle problems with different peak access times and improve the stability and flexibility of a rapid
McKeown, Gary J; Sneddon, Ian
2014-03-01
Emotion research has long been dominated by the "standard method" of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it is unable to investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, a consensus has not been reached on the correct statistical techniques that permit inferences to be made with such measures. We propose generalized additive models and generalized additive mixed models as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The generalized additive mixed model approach is preferred, as it can account for autocorrelation in time series data and allows emotion decoding participants to be modeled as random effects. To increase confidence in linear differences, we assess the methods that address interactions between categorical variables and dynamic changes over time. In addition, we provide comments on the use of generalized additive models to assess the effect size of shared perceived emotion and discuss sample sizes. Finally, we address additional uses, the inference of feature detection, continuous variable interactions, and measurement of ambiguity.
CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 storm-hazard projections
Barnard, Patrick; Erikson, Li; O'Neill, Andrea; Foxgrover, Amy; Herdman, Liv
2017-01-01
The Coastal Storm Modeling System (CoSMoS) makes detailed predictions (meter-scale) over large geographic scales (100s of kilometers) of storm-induced coastal flooding and erosion for both current and future sea-level rise (SLR) scenarios, as well as long-term shoreline change and cliff retreat. Resulting projections for future climate scenarios (sea-level rise and storms) provide emergency responders and coastal planners with critical storm-hazards information that can be used to increase public safety, mitigate physical damages, and more effectively manage and allocate resources within complex coastal settings. Several versions of CoSMoS have been implemented for areas of the California coast, including Southern California, Central California, and San Francisco Bay, and further versions will be incorporated as additional regions and improvements are developed.
TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment
Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano
2016-04-01
Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediment and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proved to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Floods Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software packages are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Therefore, some effort seems necessary to overcome these drawbacks, with the purpose of supporting and encouraging the widespread diffusion of the most reliable, albeit sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009; Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure joining the advantages offered by the SaaS (Software as a Service) delivery model and by WebGIS technology, hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an
Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.
Gür, Y
2014-12-01
The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. The skull model can thus be used for preoperative surgical planning, medical training activities, and implant design and simulation, demonstrating the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. These results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.
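The STL end of the DICOM-to-print pipeline is simple enough to sketch: an STL file is just a list of triangular facets. The writer below emits ASCII STL from a triangle list; in real use the triangles would be extracted from the segmented CT volume (e.g. by a marching-cubes surface extraction), which is not shown here.

```python
def write_ascii_stl(triangles, name="model"):
    """Serialize a list of triangles (each a tuple of three (x, y, z)
    vertices) into an ASCII STL string, the mesh format consumed by FDM
    slicing software. Normals are written as (0, 0, 0); slicers recompute
    them from the vertex winding order."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in tri:
            lines.append(f"      vertex {x:.6f} {y:.6f} {z:.6f}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One triangle is enough to show the file structure.
stl_text = write_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))], name="skull")
```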
Richly parameterized linear models additive, time series, and spatial models using random effects
Hodges, James S
2013-01-01
A First Step toward a Unified Theory of Richly Parameterized Linear Models. Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource to acquire a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results. The aut
Primary circuit iodine model addition to IMPAIR-3
Energy Technology Data Exchange (ETDEWEB)
Osetek, D.J.; Louie, D.L.Y. [Los Alamos Technical Associates, Inc., Albuquerque, NM (United States); Guntay, S.; Cripps, R. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1996-12-01
As part of a continuing effort to provide the U.S. Department of Energy (DOE) Advanced Reactor Severe Accident Program (ARSAP) with complete iodine analysis capability, a task was undertaken to expand the modeling of IMPAIR-3, an iodine chemistry code. The expanded code will enable the DOE to include detailed iodine behavior in the assessment of severe accident source terms used in the licensing of U.S. Advanced Light Water Reactors (ALWRs). IMPAIR-3 was developed at the Paul Scherrer Institute (PSI), Switzerland, and has been used by ARSAP for the past two years to analyze containment iodine chemistry for ALWR source term analyses. IMPAIR-3 is primarily a containment code, but the iodine chemistry inside the primary circuit (the Reactor Coolant System, or RCS) may influence the iodine species released into the containment; therefore, an RCS iodine chemistry model must be implemented in IMPAIR-3 to ensure thorough source term analysis. The ARSAP source term team and the PSI IMPAIR-3 developers are working together to accomplish this task. This cooperation is divided into two phases. Phase I, taking place in 1996, involves developing a stand-alone RCS iodine chemistry program called IMPRCS (IMPAIR-Reactor Coolant System). This program models a number of the chemical and physical processes of iodine that are thought to be important at the high-temperature, high-pressure conditions in the RCS. In Phase II, which is tentatively scheduled for 1997, IMPRCS will be implemented as a subroutine in IMPAIR-3. To ensure an efficient calculation, an interface/tracking system will be developed to control the use of the RCS model from the containment model. These two models will be interfaced in such a way that once the iodine is released from the RCS, it will no longer be tracked by the RCS model but will be tracked by the containment model. All RCS thermal-hydraulic parameters will be provided by other codes. (author) figs., tabs., refs.
Hazard-consistent ground motions generated with a stochastic fault-rupture model
Energy Technology Data Exchange (ETDEWEB)
Nishida, Akemi, E-mail: nishida.akemi@jaea.go.jp [Center for Computational Science and e-Systems, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba 277-0871 (Japan); Igarashi, Sayaka, E-mail: igrsyk00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Sakamoto, Shigehiro, E-mail: shigehiro.sakamoto@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Uchiyama, Yasuo, E-mail: yasuo.uchiyama@sakura.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Yamamoto, Yu, E-mail: ymmyu-00@pub.taisei.co.jp [Technology Center, Taisei Corporation, 344-1 Nase-cho, Totsuka-ku, Yokohama 245-0051 (Japan); Muramatsu, Ken, E-mail: kmuramat@tcu.ac.jp [Department of Nuclear Safety Engineering, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo 158-8557 (Japan); Takada, Tsuyoshi, E-mail: takada@load.arch.t.u-tokyo.ac.jp [Department of Architecture, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-12-15
Conventional seismic probabilistic risk assessments (PRAs) of nuclear power plants consist of probabilistic seismic hazard and fragility curves. Even when earthquake ground-motion time histories are required, they are generated to fit specified response spectra, such as uniform hazard spectra at a specified exceedance probability. These ground motions, however, are not directly linked with seismic-source characteristics. In this context, the authors propose a method based on Monte Carlo simulations to generate a set of input ground-motion time histories to develop an advanced PRA scheme that can explain exceedance probability and the sequence of safety-function loss in a nuclear power plant. These generated ground motions are consistent with the seismic hazard at a reference site, and their seismic-source characteristics can be identified in detail. Ground-motion generation is conducted for a reference site, Oarai in Japan, the location of a hypothetical nuclear power plant. A total of 200 ground motions are generated, ranging from 700 to 1100 cm/s² peak acceleration, which corresponds to a 10⁻⁴ to 10⁻⁵ annual exceedance frequency. In the ground-motion generation, seismic sources are selected according to their hazard contribution at the site, and Monte Carlo simulations with stochastic parameters for the seismic-source characteristics are then conducted until ground motions with the target peak acceleration are obtained. These ground motions are selected so that they are consistent with the hazard. Approximately 110,000 simulations were required to generate 200 ground motions with these peak accelerations. Deviations of peak ground-motion acceleration generated for the 1000-1100 cm/s² range are from 1.5 to 3.0, where the deviation is evaluated with peak ground-motion accelerations generated from the same seismic source. Deviations of 1.0 to 3.0 for stress drops, one of the stochastic parameters of seismic-source characteristics, are required to
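The accept/reject core of this Monte Carlo generation can be sketched as follows. The stress-drop prior and the "PGA relation" below are invented toy placeholders; the actual study drives a stochastic fault-rupture simulator to produce each candidate motion.

```python
import random

def generate_hazard_consistent_motions(n_target=200, pga_range=(700.0, 1100.0), seed=1):
    """Sketch of the accept/reject loop: draw stochastic source parameters
    (here only a stress drop) and keep simulations whose peak ground
    acceleration (PGA) lands in the target range tied to the hazard curve.
    Both distributions below are hypothetical stand-ins."""
    rng = random.Random(seed)
    accepted, trials = [], 0
    while len(accepted) < n_target:
        trials += 1
        stress_drop = rng.lognormvariate(2.5, 0.5)                # MPa, assumed prior
        pga = 60.0 * stress_drop * rng.lognormvariate(0.0, 0.4)   # toy cm/s^2 relation
        if pga_range[0] <= pga <= pga_range[1]:
            accepted.append((stress_drop, pga))
    return accepted, trials

motions, n_trials = generate_hazard_consistent_motions(n_target=20)
```

The ratio of trials to accepted motions mirrors the study's ~110,000 simulations for 200 motions, though the acceptance rate here depends entirely on the toy distributions.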
Non-additive model for specific heat of electrons
Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.
2016-10-01
By using the non-additive Tsallis entropy we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and we calculate the electronic specific heat considering a non-additive entropy Sq. In our method we consider an energy spectrum calculated using the one-dimensional tight-binding Schrödinger equation, whose bands (or levels) are scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider to be more appropriate to calculate this quantity in those quasiperiodic structures.
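The entropic quantity itself is compact enough to state in code: S_q = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers the Boltzmann-Gibbs-Shannon entropy in the limit q → 1. The distribution below is a toy stand-in for probabilities over the scaled spectral bands, not a Fibonacci or double-period spectrum.

```python
import math

def tsallis_entropy(p, q):
    """Non-additive Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).
    For q -> 1 this reduces to the Shannon entropy -sum_i p_i ln p_i."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Toy example: a uniform distribution over four scaled bands.
p_uniform = [0.25] * 4
s_q = tsallis_entropy(p_uniform, q=1.5)
```

For the uniform case S_q has the closed form (1 − n^(1−q))/(q − 1), a handy sanity check on any implementation.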
Directory of Open Access Journals (Sweden)
Abdul Salam Soomro
2012-10-01
Full Text Available The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and most fascinating provinces of southern Thailand, as well as other regions such as Phangnga and Phuket, devastating human lives, coastal communications and economic activities. This research study aimed to generate a tsunami-hazard-based land use planning model using GIS (Geographical Information Systems) and a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used according to the land use zoning criteria. These criteria were weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model was generated with five categories, high, moderate, low, very low and not suitable, each illustrated with an appropriate definition for decision makers to redevelop the region.
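The Saaty-scale weighting step can be sketched numerically. Below is a minimal AHP-style weight derivation using the row geometric mean, a standard approximation to the principal eigenvector of a pairwise-comparison matrix; the 3×3 matrix and the criteria named in the comment are invented for illustration (the study weights seven factors).

```python
import math

def ahp_weights(pairwise):
    """Derive criterion weights from a Saaty pairwise-comparison matrix
    using the row geometric mean, then normalize to sum to 1."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical comparison: elevation vs. shore proximity vs. population density,
# on the Saaty 1-9 scale (3 = moderate importance, 5 = strong importance).
matrix = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(matrix)
```

The resulting weights then multiply the reclassified factor rasters in the GIS overlay.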
Additional Research Needs to Support the GENII Biosphere Models
Energy Technology Data Exchange (ETDEWEB)
Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen
2013-11-30
In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
• implementation of the separation of the translocation and weathering processes;
• implementation of an improved model for carbon-14 from non-atmospheric sources;
• implementation of radon exposure pathway models;
• development of a KML processor for the output report generator module, so that data calculated on a grid can be superimposed upon digital maps for easier presentation and display;
• implementation of marine mammal models (manatees, seals, walrus, whales, etc.).
Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select "dominant" radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
• soil-to-plant uptake studies for oranges and other citrus fruits, and
• development of models for evaluation of radionuclide concentrations in highly processed foods such as oils and sugars.
Finally, renewed
Generalized Additive Models, Cubic Splines and Penalized Likelihood.
1987-05-22
in case-control studies). All models in the table include dummy variables to account for the matching. The first 3 lines of the table indicate that OA…Assoc. Breslow, N. and Day, N. (1980). Statistical methods in cancer research, Volume 1: The analysis of case-control studies. International agency
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The performanc
Pradhan, Biswajeet; Lee, Saro; Shattri, Mansor
This paper deals with landslide hazard analysis and cross-application using Geographic Information System (GIS) and remote sensing data for Cameron Highland, Penang Island and Selangor in Malaysia. The aim of this study was to cross-apply and verify a spatial probabilistic model for landslide hazard analysis. Landslide locations were identified in the study area from interpretation of aerial photographs and field surveys. Topographical/geological data and satellite images were collected and processed using GIS and image processing tools. Ten landslide-inducing parameters were considered for the landslide hazard analysis: topographic slope, aspect, curvature and distance from drainage, all derived from the topographic database; geology and distance from lineament, derived from the geologic database; land use from Landsat satellite images; soil from the soil database; precipitation amount, derived from the rainfall database; and the vegetation index value from SPOT satellite images. These factors were analyzed using an artificial neural network model to generate the landslide hazard map. Each factor's weight was determined by the back-propagation training method. The landslide hazard indices were then calculated using the trained back-propagation weights, and finally the landslide hazard map was generated using GIS tools. Landslide hazard maps were drawn for these three areas using the artificial neural network model derived not only from the data for that area but also using the weights for each parameter calculated from each of the other two areas (nine maps in all), as a cross-check of the validity of the method. For verification, the results of the analyses were compared, in each study area, with actual landslide locations. The verification results showed sufficient agreement between the presumptive hazard map and the existing data on landslide areas.
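The back-propagation weighting step can be sketched with a tiny one-hidden-layer network in NumPy. Everything below (the factor count, the synthetic labels, the architecture, the learning rate) is an invented stand-in for the GIS-derived rasters; the point is only the mechanics of training weights that are later applied per grid cell.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 3 standardized causal factors (think slope,
# curvature, distance to lineament) and a binary landslide occurrence label.
X = rng.normal(size=(300, 3))
y = (1.5 * X[:, 0] - X[:, 2] + rng.normal(0.0, 0.5, 300) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer trained by back-propagation (plain gradient descent).
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)                  # forward pass
    p = sigmoid(H @ W2 + b2).ravel()
    g = ((p - y) / len(y))[:, None]           # dLoss/dz for cross-entropy
    gW2 = H.T @ g
    gH = (g @ W2.T) * H * (1.0 - H)           # back-propagate through hidden layer
    gW1 = X.T @ gH
    W2 -= lr * gW2; b2 -= lr * g.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gH.sum(axis=0)

# The trained forward pass is the "landslide hazard index" for each cell.
hazard_index = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = float(((hazard_index > 0.5) == (y > 0.5)).mean())
```

Cross-application, as in the abstract, amounts to evaluating this forward pass with weights trained on one area against the factor rasters of another.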
A "mental models" approach to the communication of subsurface hydrology and hazards
Gibson, Hazel; Stewart, Iain S.; Pahl, Sabine; Stokes, Alison
2016-05-01
Communicating information about geological and hydrological hazards relies on appropriately worded communications targeted at the needs of the audience. But what are these needs, and how does the geoscientist discern them? This paper adopts a psychological "mental models" approach to assess the public perception of the geological subsurface, presenting the results of attitudinal studies and surveys in three communities in the south-west of England. The findings reveal important preconceptions and misconceptions regarding the impact of hydrological systems and hazards on the geological subsurface, notably in terms of the persistent conceptualisation of underground rivers and the inferred relations between flooding and human activity. The study demonstrates how such mental models can provide geoscientists with empirical, detailed and generalised data on perceptions surrounding an issue, as well as reveal unexpected outliers in perception that they may not have considered relevant, but which nevertheless may locally influence communication. Using this approach, geoscientists can develop information messages that more directly engage local concerns and create open engagement pathways based on dialogue, which in turn allow both geoscience "experts" and local "non-experts" to come together and understand each other more effectively.
Quantification of Inter-Tsunami Model Variability for Hazard Assessment Studies
Catalan, P. A.; Alcantar, A.; Cortés, P. I.
2014-12-01
There is a wide range of numerical models capable of modeling tsunamis, most of which have been properly validated and verified against standard benchmark cases and particular field or laboratory case studies. Consequently, these models are regularly used as essential tools by scientists and consulting companies in estimating the tsunami hazard on coastal communities, treating model results in a deterministic way. Most of these models are derived from the same set of equations, typically the non-linear shallow water equations, to which ad hoc terms are added to include physical effects such as friction, the Coriolis force, and others. However, these models are seldom used in unison to address the variability in the results. Therefore, in this contribution, we perform a large number of simulations using a set of numerical models and quantify the variability in the results. In order to reduce the influence of input data on the results, a single tsunami scenario is used over a common bathymetry. Next, we perform model comparisons to assess sensitivity to changes in grid resolution and Manning roughness coefficients. Results are presented both as intra-model comparisons (sensitivity to changes using the same model) and inter-model comparisons (sensitivity to changing models). For the case tested, most models reproduced fairly consistently the arrival and periodicity of the tsunami waves. However, variations in amplitude, characterized by the standard deviation between model runs, could be as large as the mean signal. This level of variability is considered too large for deterministic assessment, reinforcing the idea that uncertainty needs to be included in such studies.
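The inter-model spread statistic described above can be sketched directly. The four "models" and their short amplitude series are made-up numbers; the point is the computation of the cross-model mean, standard deviation, and their ratio, which flags where spread rivals the mean signal.

```python
from statistics import mean, stdev

# Hypothetical peak-amplitude series (m) at one gauge, from four tsunami
# codes run on the same scenario, bathymetry, grid, and roughness.
model_runs = {
    "model_A": [0.9, 1.4, 1.1],
    "model_B": [1.2, 1.8, 1.0],
    "model_C": [0.7, 1.1, 0.9],
    "model_D": [1.0, 1.6, 1.2],
}

# Inter-model statistics per output value: mean and sample standard deviation
# across models, plus the coefficient of variation (spread / mean signal).
per_step = list(zip(*model_runs.values()))
inter_mean = [mean(step) for step in per_step]
inter_sd = [stdev(step) for step in per_step]
cov = [sd / m for sd, m in zip(inter_sd, inter_mean)]
```

A coefficient of variation approaching 1 corresponds to the paper's finding that the standard deviation between runs can be as large as the mean signal.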
Energy Technology Data Exchange (ETDEWEB)
Suzette Payne
2006-04-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
Energy Technology Data Exchange (ETDEWEB)
Suzette Payne
2007-08-01
This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy greater than uniform rock due to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggering causes for hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.
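The final step, collapsing a risk curve into an annual risk number, can be sketched as a trapezoidal integration of loss against annual exceedance probability. The probability-loss pairs below are illustrative placeholders, not values from the Fella River study; running the same integration on the minimum and maximum curves brackets the uncertainty.

```python
def expected_annual_loss(curve):
    """Trapezoidal area under a (probability, loss) exceedance curve,
    with points ordered from high to low annual exceedance probability.
    This is the standard 'expected annual loss' summary of a risk curve."""
    eal = 0.0
    for (p1, l1), (p2, l2) in zip(curve, curve[1:]):
        eal += (p1 - p2) * (l1 + l2) / 2.0
    return eal

# Hypothetical "average" risk curve: 1/10-yr, 1/100-yr, 1/1000-yr losses (EUR).
avg_curve = [(0.1, 0.2e6), (0.01, 2.0e6), (0.001, 8.0e6)]
eal_avg = expected_annual_loss(avg_curve)
```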
Paukatong, K V; Kunawasen, S
2001-01-01
Nham is a traditional Thai fermented pork sausage. The major ingredients of Nham are ground pork meat and shredded pork rind. Nham has been reported to be contaminated with Salmonella spp., Staphylococcus aureus, and Listeria monocytogenes. Therefore, it is a potential cause of foodborne disease for consumers. A Hazard Analysis and Critical Control Points (HACCP) generic model has been developed for the Nham process. Nham processing plants were observed and a generic flow diagram of Nham processes was constructed. Hazard analysis was then conducted. In addition to the microbial hazards from the pathogens previously found in Nham, sodium nitrite and metal were identified as the chemical and physical hazards in this product, respectively. Four steps in the Nham process have been identified as critical control points: the weighing of the nitrite compound, stuffing, fermentation, and labeling. The chemical hazard of nitrite must be controlled during the weighing step; the critical limit of nitrite levels in the Nham mixture has been set at 100-200 ppm, which is high enough to control Clostridium botulinum but does not pose a chemical hazard to the consumer. The physical hazard from metal clips can be prevented by visual inspection of every Nham product during stuffing. The microbiological hazard in Nham can be reduced in the fermentation process; the critical limit of the pH of Nham was set at lower than 4.6. Finally, since this product is not cooked during processing, educating the consumer by providing information on the label, such as "safe if cooked before consumption", could be an alternative way to prevent the microbiological hazards of this product.
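The four critical limits lend themselves to a simple batch-screening sketch. The function below encodes the limits stated in the abstract (nitrite 100-200 ppm, no metal, pH below 4.6, consumer warning on the label); the batch values passed in are invented.

```python
def check_nham_ccps(nitrite_ppm, ph, metal_detected, label_has_warning):
    """Screen one batch against the four critical control points of the
    Nham HACCP model; returns a list of critical-limit failures."""
    failures = []
    if not 100 <= nitrite_ppm <= 200:
        failures.append("nitrite outside 100-200 ppm at weighing")
    if metal_detected:
        failures.append("metal fragment detected at stuffing")
    if not ph < 4.6:
        failures.append("fermentation pH not below 4.6")
    if not label_has_warning:
        failures.append("label lacks 'safe if cooked before consumption'")
    return failures

batch_ok = check_nham_ccps(150, 4.4, False, True)
batch_bad = check_nham_ccps(250, 4.8, False, True)
```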
Directory of Open Access Journals (Sweden)
J. A. Álvarez-Gómez
2013-05-01
Full Text Available El Salvador is the smallest and most densely populated country in Central America; its coast has a length of approximately 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there have been 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. In the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results obtained
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-11-01
El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that at the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results
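The empirical run-up step used for the country-wide screening can be illustrated with a deliberately crude relation: run-up as a constant multiple of the modelled near-shore elevation. The amplification factor below is a placeholder assumption, not the relation fitted in the study.

```python
def estimate_runup(eta_nearshore, amplification=2.0):
    """Crude screening estimate: run-up as a constant multiple of the
    modelled near-shore maximum elevation (m). The factor 2.0 is a
    hypothetical placeholder; the study fits its own empirical relation."""
    return amplification * eta_nearshore

# Near-shore maximum elevations (m) at three points along the coast.
runups = [estimate_runup(eta) for eta in [1.2, 2.8, 3.5]]
# Flag points exceeding the 5 m run-up level noted for the western coast.
flooded = [r > 5.0 for r in runups]
```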
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
Kirschbaum, Dalia B.; Adler, Robert; Hong, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon that work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. The case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and the results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited non-landslide event data for more comprehensive evaluation. Additional factors that may improve
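A rainfall intensity-duration triggering relationship is typically a power-law threshold of the form I ≥ a·D^b. The sketch below uses placeholder coefficients, not the regional values calibrated from the Hurricane Mitch inventory.

```python
def exceeds_id_threshold(intensity_mm_hr, duration_hr, a=10.0, b=-0.6):
    """Rainfall intensity-duration triggering test: I >= a * D**b.
    Coefficients a and b here are illustrative placeholders; operational
    use requires values fitted to a regional landslide inventory."""
    return intensity_mm_hr >= a * duration_hr ** b

# (intensity mm/hr, duration hr) observations for three storms.
alerts = [exceeds_id_threshold(i, d)
          for i, d in [(12.0, 1.0), (3.0, 6.0), (2.0, 24.0)]]
```

Cells where both the susceptibility class and this threshold test are high would then be flagged as potential landslide areas.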
Development of models to inform a national Daily Landslide Hazard Assessment for Great Britain
Dijkstra, Tom A.; Reeves, Helen J.; Dashwood, Claire; Pennington, Catherine; Freeborough, Katy; Mackay, Jonathan D.; Uhlemann, Sebastian S.; Chambers, Jonathan E.; Wilkinson, Paul B.
2015-04-01
were combined with records of observed landslide events to establish which antecedent effective precipitation (AEP) signatures of different duration could be used as a pragmatic proxy for the occurrence of landslides. It was established that 1-, 7- and 90-day AEP provided the most significant correlations, and these were used to calculate the probability of at least one landslide occurring. The method was then extended over the period 2006 to 2014 and the results evaluated against observed occurrences. It is recognised that AEP is a relatively poor proxy for simulating effective stress conditions along potential slip surfaces. However, the temporal pattern of landslide probability compares well with the observed occurrences and provides a potential benefit to assist with the DLHA. Further work is continuing to fine-tune the model for landslide type, better spatial resolution of effective precipitation input and cross-reference to models that capture changes in water balance and conditions along slip surfaces. The latter is facilitated by intensive research at several field laboratories, such as the Hollin Hill site in Yorkshire, England. At this site, a decade of activity has generated a broad range of research and a wealth of data. This paper reports on one example of recent work: the characterisation of near-surface hydrology using infiltration experiments in which hydrological pathways are captured, among others, by electrical resistivity tomography. This research, which has further developed our understanding of soil moisture movement in a heterogeneous landslide complex, has highlighted the importance of establishing detailed ground models to enable determination of landslide potential at high resolution. In turn, the knowledge gained through this research is used to enhance the expertise for the daily landslide hazard assessments at a national scale.
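The probability of at least one landslide given the 1-, 7- and 90-day AEP signatures can be sketched as a logistic regression; all data and coefficients below are synthetic stand-ins for the observed records used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for the dataset: 1-, 7- and 90-day antecedent effective
# precipitation (AEP, mm) per day, with landslide occurrence made more likely
# when short- and long-duration AEP are both high.
n = 2000
aep = rng.gamma(shape=2.0, scale=[10.0, 40.0, 250.0], size=(n, 3))
logit = -6.0 + 0.08 * aep[:, 0] + 0.01 * aep[:, 1] + 0.004 * aep[:, 2]
occurred = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Daily probability of at least one landslide given an AEP signature.
model = LogisticRegression(max_iter=1000).fit(aep, occurred)
p_wet = model.predict_proba([[60.0, 150.0, 900.0]])[0, 1]
p_dry = model.predict_proba([[2.0, 10.0, 100.0]])[0, 1]
```

A wet antecedent signature yields a markedly higher daily landslide probability than a dry one, which is the shape of proxy the DLHA needs.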
Butler, David R.; Walsh, Stephen J.; Brown, Daniel G.
1991-01-01
Methods are described for using Landsat Thematic Mapper digital data and digital elevation models for the display of natural hazard sites in a mountainous region of northwestern Montana, USA. Hazard zones can be easily identified on the three-dimensional images. Proximity of facilities such as highways and building locations to hazard sites can also be easily displayed. A temporal sequence of Landsat TM (or similar) satellite data sets could also be used to display landscape changes associated with dynamic natural hazard processes.
Age-period-cohort models using smoothing splines: a generalized additive model approach.
Jiang, Bei; Carriere, Keumhee C
2014-02-20
Age-period-cohort (APC) models are used to analyze temporal trends in disease or mortality rates, dealing with linear dependency among associated effects of age, period, and cohort. However, the nature of sparseness in such data has severely limited the use of APC models. To deal with these practical limitations and issues, we advocate cubic smoothing splines. We show that the methods of estimable functions proposed in the framework of generalized linear models can still be considered to solve the non-identifiability problem when the model fitting is within the framework of generalized additive models with cubic smoothing splines. Through simulation studies, we evaluate the performance of the cubic smoothing splines in terms of the mean squared errors of estimable functions. Our results support the use of cubic smoothing splines for APC modeling with sparse but unaggregated data from a Lexis diagram.
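As a minimal illustration of the smoother involved (not of the estimable-function machinery that resolves APC non-identifiability), a cubic smoothing spline can be fitted to noisy age-specific rates:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch: smooth a noisy age-specific rate curve with a cubic smoothing
# spline (k=3). The rates and noise level are invented for illustration.
rng = np.random.default_rng(0)
age = np.arange(20, 80)
true_rate = 0.001 * np.exp(0.05 * (age - 20))             # rising rate with age
observed = true_rate * rng.lognormal(0.0, 0.2, age.size)  # sparse/noisy rates

# s targets the residual sum of squares on the log scale (~n * noise variance).
spline = UnivariateSpline(age, np.log(observed), k=3, s=age.size * 0.04)
smoothed = np.exp(spline(age))
```

The smoothing parameter `s` plays the role the paper delegates to the generalized additive model machinery; here it is set from the assumed noise variance.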
Hayes, P.; Trigg, J. L.; Stauffer, D.; Hunter, G.; McQueen, J.
2006-05-01
Consequence assessment (CA) operations are those processes that attempt to mitigate negative impacts of incidents involving hazardous materials such as chemical, biological, radiological, nuclear, and high explosive (CBRNE) agents, facilities, weapons, or transportation. Incident types range from accidental spillage of chemicals at/en route to/from a manufacturing plant, to the deliberate use of radiological or chemical material as a weapon in a crowded city. The impacts of these incidents are highly variable, from little or no impact to catastrophic loss of life and property. Local and regional scale atmospheric conditions strongly influence atmospheric transport and dispersion processes in the boundary layer, and the extent and scope of the spread of dangerous materials in the lower levels of the atmosphere. Therefore, CA personnel charged with managing the consequences of CBRNE incidents must have detailed knowledge of current and future weather conditions to accurately model potential effects. A meteorology team was established at the U.S. Defense Threat Reduction Agency (DTRA) to provide weather support to CA personnel operating DTRA's CA tools, such as the Hazard Prediction and Assessment Capability (HPAC) tool. The meteorology team performs three main functions: 1) regular provision of meteorological data for use by personnel using HPAC, 2) determination of the best performing medium-range model forecast for the 12 - 48 hour timeframe and 3) provision of real-time help-desk support to users regarding acquisition and use of weather in HPAC CA applications. The normal meteorology team operations were expanded during a recent modeling project which took place during the 2006 Winter Olympic Games. The meteorology team took advantage of special weather observation datasets available in the domain of the Winter Olympic venues and undertook a project to improve weather modeling at high resolution. The varied and complex terrain provided a special challenge to the
Combining an additive and tree-based regression model simultaneously: STIMA
Dusseldorp, E.; Conversano, C.; Os, B.J. van
2010-01-01
Additive models and tree-based regression models are two main classes of statistical models used to predict the scores on a continuous response variable. It is known that additive models become very complex in the presence of higher order interaction effects, whereas some tree-based models, such as
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two sites: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
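The calibrate-here, validate-there workflow with AUROC can be sketched as follows; the GLM is a plain logistic regression on synthetic terrain attributes, and the site roles only loosely echo the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_site(n, noise):
    """Synthetic site: three terrain attributes (e.g. slope, wetness index,
    solar radiation) drive disturbance occurrence; noise varies by site."""
    X = rng.normal(size=(n, 3))
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + noise * rng.normal(size=n)
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y

X_cal, y_cal = make_site(1500, noise=0.5)   # calibration sites (cf. Sabine + Fosheim)
X_new, y_new = make_site(500, noise=1.0)    # independent transfer site (cf. Cape Bounty)

glm = LogisticRegression().fit(X_cal, y_cal)
auroc_cal = roc_auc_score(y_cal, glm.predict_proba(X_cal)[:, 1])
auroc_new = roc_auc_score(y_new, glm.predict_proba(X_new)[:, 1])
```

As in the study, the transferred model is expected to validate somewhat worse at the independent site than at the calibration sites.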
Directory of Open Access Journals (Sweden)
Durand Eduard
2016-01-01
Full Text Available Along the river Loire, in order to have a homogeneous method for specific risk assessment studies, a new model named CARDigues (for Levee Breach Hazard Calculation) was developed in a partnership between DREAL Centre-Val de Loire (owner of the levees), Cerema and Irstea. This model makes it possible to estimate the probability of failure of every levee section and to integrate and cross different “stability” parameters such as topography and included structures, geology and geotechnical characteristics of materials, hydraulic loads… and observations from visual inspections or instrumentation results considered as disorders (seepage, burrowing animals, vegetation, pipes, etc.). The model and integrated tool CARDigues make it possible to check, for each levee section, the probability of initiation and rupture for five breaching scenarios: overflowing, internal erosion, slope instability, external erosion and uplift. It has recently been updated and has been applied to several levee systems by different contractors. The article presents the CARDigues model principles and its recent developments (version V28.00) with examples on the river Loire, and how it is currently used for a relevant and global levee system diagnosis and assessment. Levee reinforcement or improvement management is also a prospective application of the CARDigues model.
Doubly stochastic models for volcanic hazard assessment at Campi Flegrei caldera
Bevilacqua, Andrea
2016-01-01
This study provides innovative mathematical models for assessing the eruption probability and associated volcanic hazards, and applies them to the Campi Flegrei caldera in Italy. Throughout the book, significant attention is devoted to quantifying the sources of uncertainty affecting the forecast estimates. The Campi Flegrei caldera is certainly one of the world’s highest-risk volcanoes, with more than 70 eruptions over the last 15,000 years, prevalently explosive ones of varying magnitude, intensity and vent location. In the second half of the twentieth century the volcano apparently once again entered a phase of unrest that continues to the present. Hundreds of thousands of people live inside the caldera and over a million more in the nearby city of Naples, making a future eruption of Campi Flegrei an event with potentially catastrophic consequences at the national and European levels.
Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model
Directory of Open Access Journals (Sweden)
Ge-Jin Chu
2014-01-01
Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 ≤ q < 1) regularizations, for Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Patra, A. K.; Connor, C.; Webley, P.; Jones, M.; Charbonnier, S. J.; Connor, L.; Gallo, S.; Bursik, M. I.; Valentine, G.; Hughes, C. G.; Aghakhani, H.; Renschler, C. S.; Kosar, T.
2014-12-01
We report here on an effort to improve the sustainability, robustness and usability of the core modeling and simulation tools housed in the collaboratory VHub.org and used in the study of complex volcanic behavior. In particular, we focus on tools that support large-scale mass flows (TITAN2D), ash deposition/transport and dispersal (Tephra2 and PUFF), and lava flows (Lava2). These tools have become very popular in the community, especially due to the availability of an online usage modality. The redevelopment of the tools to take advantage of new hardware and software advances was a primary thrust for the effort. However, as work began we reoriented the effort to also take advantage of significant new opportunities for supporting the complex workflows and use of distributed data resources that will enable effective and efficient hazard analysis.
Risk assessment framework of fate and transport models applied to hazardous waste sites
Energy Technology Data Exchange (ETDEWEB)
Hwang, S.T.
1993-06-01
Risk assessment is an increasingly important part of the decision-making process in the cleanup of hazardous waste sites. Despite guidelines from regulatory agencies and considerable research efforts to reduce uncertainties in risk assessments, there are still many issues unanswered. This paper presents new research results pertaining to fate and transport models, which will be useful in estimating exposure concentrations and will help reduce uncertainties in risk assessment. These developments include approaches for estimating (1) the degree of emissions and concentration levels of volatile pollutants during the use of contaminated water, (2) absorption of organic chemicals in the soil matrix through the skin, and (3) steady-state, near-field contaminant concentrations in the aquifer within a waste boundary.
Modelling the impacts of coastal hazards on land-use development
Ramirez, J.; Vafeidis, A. T.
2009-04-01
Approximately 10% of the world's population live in close proximity to the coast and are potentially susceptible to tropical or extra-tropical storm-surge events. These events will be exacerbated by projected sea-level rise (SLR) in the 21st century. Accelerated SLR is one of the more certain impacts of global warming and can have major effects on humans and ecosystems. Of particular vulnerability are densely populated coastal urban centres containing globally important commercial resources, with assets in the billions of USD. Moreover, the rates of growth of coastal populations, which are reported to be growing faster than the global mean, are leading to increased human exposure to coastal hazards. Consequently, potential impacts of coastal hazards can be significant in the future and will depend on various factors, but actual impacts can be considerably reduced by appropriate human decisions on coastal land-use management. At the regional scale, it is therefore necessary to identify which coastal areas are vulnerable to these events and to explore potential long-term responses reflected in land usage. Land-use change modelling is a technique which has been extensively used in recent years for studying the processes and mechanisms that govern the evolution of land use and which can potentially provide valuable information related to the future coastal development of regions that are vulnerable to physical forcings. Although studies have utilized land-use classification maps to determine the impact of sea-level rise, few use land-use projections to make these assessments, and none have considered the adaptive behaviour of coastal dwellers exposed to hazards. In this study a land-use change model, which is based on artificial neural networks (ANN), was employed for predicting coastal urban and agricultural development. The model uses as inputs a series of spatial layers, which include information on population distribution, transportation networks, existing urban centres, and
A fast, calibrated model for pyroclastic density currents kinematics and hazard
Esposti Ongaro, Tomaso; Orsucci, Simone; Cornolti, Fulvio
2016-11-01
Multiphase flow models represent valuable tools for the study of the complex, non-equilibrium dynamics of pyroclastic density currents. Particle sedimentation, flow stratification and rheological changes, depending on the flow regime, interaction with topographic obstacles, turbulent air entrainment, buoyancy reversal, and other complex features of pyroclastic currents can be simulated in two and three dimensions, by exploiting efficient numerical solvers and the improved computational capability of modern supercomputers. However, numerical simulations of polydisperse gas-particle mixtures are quite computationally expensive, so that their use in hazard assessment studies (where there is the need to evaluate the probability of hazardous actions over hundreds of possible scenarios) is still challenging. To this end, a simplified integral (box) model can be used, under the appropriate hypotheses, to describe the kinematics of pyroclastic density currents over a flat topography, their scaling properties and their depositional features. In this work, multiphase flow simulations are used to evaluate integral model approximations, to calibrate its free parameters and to assess the influence of the input data on the results. Two-dimensional numerical simulations describe the generation and decoupling of a dense, basal layer (formed by progressive particle sedimentation) from the dilute transport system. In the Boussinesq regime (i.e., for solid mass fractions below about 0.1), the current Froude number (i.e., the ratio between the current inertia and buoyancy) does not strongly depend on initial conditions and is consistent with that measured in laboratory experiments (i.e., between 1.05 and 1.2). For higher density ratios (solid mass fraction in the range 0.1-0.9) but still in a relatively dilute regime (particle volume fraction lower than 0.01), numerical simulations demonstrate that the box model is still applicable, but the Froude number depends on the reduced
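Under the box-model hypotheses (flat topography, constant volume, dilute regime), the front position follows from dl/dt = Fr·sqrt(g'·h) with h = V/l, which has a closed form; a quick numerical check, with assumed values for g', V and the initial length:

```python
import numpy as np

# Box-model kinematics for a dilute gravity current on a flat surface:
# dl/dt = Fr * sqrt(g' * V / l)  integrates to
# l(t) = (l0**1.5 + 1.5 * Fr * sqrt(g'*V) * t)**(2/3).
Fr = 1.1          # Froude number in the calibrated 1.05-1.2 range
g_prime = 1.0     # reduced gravity (m/s^2), assumed
V = 1.0e4         # current volume per unit width (m^2), assumed
l0 = 100.0        # initial current length (m), assumed

def front_analytic(t):
    return (l0**1.5 + 1.5 * Fr * np.sqrt(g_prime * V) * t) ** (2.0 / 3.0)

# Explicit Euler integration of the same ODE as a consistency check.
dt, l = 0.01, l0
for _ in range(int(600 / dt)):
    l += dt * Fr * np.sqrt(g_prime * V / l)
```

The t^(2/3) front law is what gives the box model its scaling properties and makes it cheap enough for scenario-by-scenario hazard assessment.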
Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.
2008-07-01
The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.
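The gain of a nonlinear additive autoregression over a linear one can be illustrated by comparing residual variances on a synthetic series; the tanh basis term below is an assumed stand-in for the nonparametric smoothers fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic beat-to-beat series with a mildly nonlinear autoregression,
# standing in for heart rate driven by its own past and a respiratory input.
n = 3000
resp = np.sin(2 * np.pi * 0.25 * np.arange(n))  # respiratory external input
x = np.zeros(n)
for t in range(1, n):
    x[t] = (0.8 * x[t-1] - 0.3 * np.tanh(2.0 * x[t-1])
            + 0.2 * resp[t] + 0.1 * rng.normal())

# Compare a linear AR model with external input against an additive nonlinear
# one (same regression augmented with a tanh basis term), by residual variance.
y = x[1:]
X_lin = np.column_stack([x[:-1], resp[1:]])
X_non = np.column_stack([x[:-1], np.tanh(2.0 * x[:-1]), resp[1:]])
res_lin = y - X_lin @ np.linalg.lstsq(X_lin, y, rcond=None)[0]
res_non = y - X_non @ np.linalg.lstsq(X_non, y, rcond=None)[0]
```

The nonlinear additive fit leaves less unexplained variance, which is the paper's criterion for preferring the nonlinear description of heart rate and blood pressure control.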
Simulated hazards of losing infection-free status in a Dutch BHV1 model.
Vonk Noordegraaf, A; Labrovic, A; Frankena, K; Pfeiffer, D U; Nielen, M
2004-01-30
A compulsory eradication programme for bovine herpesvirus 1 (BHV1) was implemented in the Netherlands in 1998. At the start of the programme, about 25% of the dairy herds were certified BHV1-free. Simulation models have played an important role in the decision-making process associated with BHV1 eradication. Our objective in this study was to improve understanding of model behaviour (as part of internal validation) regarding loss by herds of the BHV1-free certificate. Using a Cox proportional hazards model, the association between farm characteristics and the risk of certificate loss during simulation was quantified. The overall fraction of initially certified herds experiencing certificate loss during simulation was 3.0% in 6.5 years. Factors that increased the risk of earlier certificate loss in the final multivariable Cox model were a higher 'yearly number of cattle purchased', 'farm density within a 1 km radius' and 'cattle density within a 1 km radius'. The qualitative behaviour of the risk factors we found agreed with observations in field studies.
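Fitting a Cox proportional hazards model amounts to maximising the partial likelihood; for a single covariate this reduces to a few Newton-Raphson steps, sketched here on synthetic data (the covariate loosely mimics 'yearly number of cattle purchased', standardised; nothing below is the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy survival data: one farm-level covariate; higher values shorten the
# time to certificate loss with a true log-hazard ratio of 0.7.
n = 400
x = rng.normal(size=n)
time = rng.exponential(1.0 / np.exp(0.7 * x))  # hazard proportional to exp(0.7*x)
event = rng.random(n) < 0.8                    # ~20% randomly censored

# Newton-Raphson on the Cox partial log-likelihood for a single coefficient.
order = np.argsort(time)
x_s, e_s = x[order], event[order]
beta = 0.0
for _ in range(25):
    r = np.exp(beta * x_s)
    # Risk-set sums over subjects still at risk (cumulative sums from the end).
    s0 = np.cumsum(r[::-1])[::-1]
    s1 = np.cumsum((r * x_s)[::-1])[::-1]
    s2 = np.cumsum((r * x_s**2)[::-1])[::-1]
    grad = np.sum(e_s * (x_s - s1 / s0))                 # score
    hess = -np.sum(e_s * (s2 / s0 - (s1 / s0) ** 2))     # observed information (negated)
    beta -= grad / hess
```

With a few hundred herds the estimate lands near the true log-hazard ratio; a multivariable fit generalises this with vector-valued score and information.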
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In that methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on the resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
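A multilinear capacity curve with a post-capping negative branch is just a piecewise-linear backbone; a minimal sketch with illustrative control points (not the S1L parameter values proposed in the study):

```python
import numpy as np

# Multilinear SDOF capacity (pushover) backbone with a negative-stiffness
# branch after the ultimate (capping) point. Control points are invented
# for illustration: origin, yield, capping, then softening to a residual.
d_pts = np.array([0.0, 1.0, 4.0, 10.0])    # spectral displacement (in)
f_pts = np.array([0.0, 0.25, 0.30, 0.05])  # spectral acceleration (g)

def capacity(d):
    """Piecewise-linear backbone force at displacement demand d
    (clamped to the residual plateau beyond the last point)."""
    return np.interp(d, d_pts, f_pts)
```

In a nonlinear time history analysis this backbone replaces the HAZUS curvilinear curve as the restoring-force envelope of the SDOF oscillator.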
Accelerated life models modeling and statistical analysis
Bagdonavicius, Vilijandas
2001-01-01
Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Variables…
AschFlow - A dynamic landslide run-out model for medium scale hazard analysis.
Luna, Byron Quan; Blahut, Jan; van Asch, Theo; van Westen, Cees; Kappes, Melanie
2015-04-01
Landslides and debris flow hazard assessments require a scale-dependent analysis in order to mitigate damage and other negative consequences at the respective scales of occurrence. Medium- or large-scale landslide run-out modelling for many possible landslide initiation areas has been a cumbersome task in the past. This arises from the difficulty of precisely defining the location and volume of the released mass, and from the inability of run-out models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). Most of the existing physically based run-out models have complications in handling such situations, and therefore empirical methods have been used as a practical means to predict landslide mobility at a medium scale (1:10,000 to 1:50,000). In this context, a simple medium-scale numerical model for rapid mass movements in urban and mountainous areas was developed. The deterministic nature of the approach makes it possible to calculate the velocity, height and increase in mass by erosion, resulting in the estimation of various forms of impacts exerted by debris flows at the medium scale. The established and implemented model ("AschFlow") is a 2-D one-phase continuum model that simulates the entrainment, spreading and deposition processes of a landslide or debris flow at a medium scale. The flow is thus treated as a single-phase material, whose behavior is controlled by rheology (e.g. Voellmy or Bingham). The developed regional model "AschFlow" was applied and evaluated in well-documented areas with known past debris flow events.
Directory of Open Access Journals (Sweden)
N. Diodato
2004-01-01
Full Text Available Damaging hydrogeomorphological events are defined as one or more simultaneous phenomena (e.g. accelerated erosion, landslides, flash floods and river floods) occurring in a spatially and temporally random way and triggered by rainfall of different intensity and extent. Storm rainfall values are highly dependent on weather conditions and relief. However, the impact of rainstorms in Mediterranean mountain environments depends mainly on climatic fluctuations in the short and long term, especially in rainfall quantity. An algorithm for the characterisation of this impact, called the Rainfall Hazard Index (RHI), is developed with an inexpensive methodology. In RHI modelling, we assume that the river-torrential system has adapted to the natural hydrological regime, and that a sudden fluctuation in this regime, especially one exceeding the thresholds of an acceptable range of flexibility, may have disastrous consequences for the mountain environment. RHI integrates two rainfall variables based upon current and historical storm depth data, both of a fixed duration, and a dimensionless parameter representative of the degree of ecosystem flexibility. The approach was applied to a test site in the Benevento river-torrential landscape, Campania (Southern Italy). A database including data from 27 events which occurred during a 77-year period (1926-2002) was compared with the Benevento-station RHI(24h) for a qualitative validation. Trends in RHIx for annual maximum storms of duration 1, 3 and 24 h were also examined. Little change is observed at the 3- and 24-h storm durations, but a significant increase results in the hazard of short and intense storms (RHIx(1h)), in agreement with a reduction in the return period for extreme rainfall events.
Cellular parameters for track structure modelling of radiation hazard in space
Hollmark, M.; Lind, B.; Gudowska, I.; Waligorski, M.
Based on irradiation with 45 MeV/u N and B ions and with Co-60 gamma rays, track structure cellular parameters have been fitted for V79-379A Chinese hamster lung fibroblasts and for human melanoma cells (AA wtp53). These sets of parameters will be used to develop a calculation of radiation hazard in deep space, based on the system for evaluating, summing and reporting occupational exposures proposed in 1967 by a subcommittee of the NCRP, but never issued as an NCRP report. The key concepts of this system were: i) expression of the risk from all radiation exposures relative to that from a whole-body exposure to Co-60 radiation; ii) relating the risk from any exposure to that of the standard (Co-60) radiation through an "effectiveness factor" (ef), a product of sub-factors representing radiation quality, body region irradiated, and depth of penetration of radiation, the product of absorbed dose by ef being termed the "exposure record unit" (eru); iii) development of ef values and a cumulative eru record for external and internal emitters. Application of this concept should provide a better description of the Gy-equivalent presently in use by NASA for evaluating risk in deep space than the equivalent dose following ICRP-60 recommendations. Dose and charged-particle fluence levels encountered in space, particularly after Solar Particle Events, require that deterministic rather than stochastic effects be considered. Also, synergistic effects due to simultaneous multiple charged-particle transfers may have to be considered. Thus, models applicable in radiotherapy, where the Gy-equivalent is also applied, in conjunction with transport calculations performed using, e.g., the ADAM and EVA phantoms, along the concepts of the 1967 NCRP system, may be more appropriate for evaluating the radiation hazard from external fields with a large flux and a major high-LET component.
Energy Technology Data Exchange (ETDEWEB)
Harper, Bryan [Oregon State University (United States); Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan [Pacific Northwest National Laboratory (United States); Tang, Kaizhi [Intelligent Automation, Inc. (United States); Heredia-Langner, Alejandro [Pacific Northwest National Laboratory (United States); Lins, Roberto [CPqAM, Oswaldo Cruz Foundation, FIOCRUZ-PE (Brazil); Harper, Stacey, E-mail: stacey.harper@oregonstate.edu [Oregon State University (United States)
2015-06-15
The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure–toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure–activity relationships.
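Hazard ranking by clustering of response profiles can be sketched with agglomerative (Ward) clustering; the profiles below are synthetic stand-ins for EZ Metric scores across 68 nanomaterials, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)

# Hypothetical response profiles (rows: nanomaterials, columns: sub-lethal
# morbidity/mortality endpoints scored 0-1). Two synthetic groups stand in
# for low- and high-hazard materials.
low = rng.uniform(0.0, 0.2, size=(30, 6))
high = rng.uniform(0.6, 1.0, size=(38, 6))
profiles = np.vstack([low, high])

# Agglomerative clustering with Ward linkage, cut into two clusters.
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Clusters recovered this way would then be inspected against core composition and surface chemistry, as in the study's hazard-ranking analysis.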
A forecasting and forewarning model for methane hazard in working face of coal mine based on LS-SVM
Institute of Scientific and Technical Information of China (English)
CAO Shu-gang; LIU Yan-bao; WANG Yan-ping
2008-01-01
To improve the precision and reliability of predicting methane hazard at the working face of a coal mine, we propose a forecasting and forewarning model for methane hazard based on the least squares support vector machine (LS-SVM) multi-classifier and regression machine. For the forecasting model, the methane concentration can be considered a nonlinear time series, and the time series analysis method is adopted to predict the change in methane concentration using LS-SVM regression. For the forewarning model, which is based on the forecasting results, the methane hazard is classified into four grades by the LS-SVM multi-classification method: normal, attention, warning and danger. According to the forewarning results, corresponding measures are taken. The model was used to forecast and forewarn for the K9 working face. The results show that the LS-SVM regression forecasting has high precision and that the forewarning results based on the LS-SVM multi-classifier are credible. Therefore, it is an effective model-building method for continuous prediction of methane concentration and hazard forewarning at the working face.
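An LS-SVM regression has a closed-form dual solution (one bordered linear system in Suykens' formulation); the sketch below one-step-ahead predicts a synthetic concentration series from lag vectors, with assumed kernel width and regularisation rather than tuned values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "methane concentration" series and its lag-vector embedding.
series = 0.8 + 0.1 * np.sin(0.3 * np.arange(220)) + 0.01 * rng.normal(size=220)
lags = 5
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

def rbf(A, B, sigma=0.5):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# LS-SVM regression dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
gamma = 100.0                      # regularisation constant (assumed)
K = rbf(X, X)
m = len(y)
M = np.block([[np.zeros((1, 1)), np.ones((1, m))],
              [np.ones((m, 1)), K + np.eye(m) / gamma]])
sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]
pred = K @ alpha + b               # in-sample one-step-ahead predictions
```

The multi-class forewarning stage would then map predicted concentrations (or LS-SVM classifier outputs) onto the normal/attention/warning/danger grades.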
A general additive-multiplicative rates model for recurrent event data
Institute of Scientific and Technical Information of China (English)
[No author listed]
2009-01-01
In this article, we propose a general additive-multiplicative rates model for recurrent event data. The proposed model includes the additive rates and multiplicative rates models as special cases. For the inference on the model parameters, estimating equation approaches are developed, and asymptotic properties of the proposed estimators are established through modern empirical process theory. In addition, an illustration with multiple-infection data from a clinic study on chronic granulomatous disease is provided.
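For orientation, one representative specification of an additive-multiplicative rates model (a common form in this literature; the paper's exact formulation may differ) is

```latex
E\{\mathrm{d}N(t) \mid Z\} \;=\; \bigl\{\, Z_1(t)^{\top}\beta_0 \;+\; \lambda_0(t)\,\exp\{Z_2(t)^{\top}\gamma_0\} \,\bigr\}\, \mathrm{d}t ,
```

where $N(t)$ counts recurrent events, $\lambda_0(t)$ is an unspecified baseline rate, and $Z_1$, $Z_2$ are (possibly overlapping) covariate subsets. Setting $\beta_0 = 0$ recovers the multiplicative (proportional) rates model, and dropping the multiplicative term recovers the additive rates model, which is how the class nests both special cases.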
Validation analysis of probabilistic models of dietary exposure to food additives.
Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J
2003-10-01
The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group, and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent of brands or the per cent of eating occasions within a food group that contained the additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of the full conceptual models. While the distributions of intake estimates from the models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
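The probabilistic intake calculation described above can be sketched as a Monte Carlo combination of the three model components. All distributions, parameter values, and the MPL below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated consumer-days

# three model components (all parameter values hypothetical)
intake_g = rng.lognormal(np.log(50), 0.6, n)   # food intake within a food group, g/day
present = rng.random(n) < 0.30                 # P(additive present), e.g. 30% of brands
conc = rng.lognormal(np.log(100), 0.4, n)      # additive concentration, mg/kg food

modelled_mg = intake_g / 1000 * conc * present # modelled additive intake, mg/day (0 when absent)

# conservative point estimate: high-percentile intake with the additive at the MPL in all foods
mpl = 300.0                                    # maximum permitted level, mg/kg (hypothetical)
conservative_mg = np.percentile(intake_g, 97.5) / 1000 * mpl
modelled_p975 = np.percentile(modelled_mg, 97.5)
```

The validity criterion from the abstract corresponds to checking that `modelled_p975` falls below `conservative_mg` while remaining above the brand-level 'true' intake, which this toy setup cannot reproduce without a reference database.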
Directory of Open Access Journals (Sweden)
Lois A Gelfand
2016-03-01
Full Text Available Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing the SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained by combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
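A Weibull AFT data-generating process with administrative censoring, of the kind used in such simulation studies, can be sketched as follows. The coefficients, shape parameter, and censoring time are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 5000, 1.5                 # sample size and Weibull shape
trt = rng.integers(0, 2, n)      # binary treatment indicator
b0, b1 = 1.0, 0.5                # hypothetical AFT coefficients (log-time scale)

# Weibull AFT: T = exp(b0 + b1*x) * E**(1/k), with E ~ Exp(1)
T = np.exp(b0 + b1 * trt) * rng.exponential(1.0, n) ** (1 / k)
c = 6.0                          # administrative censoring time
time, event = np.minimum(T, c), T <= c

# under AFT the covariate multiplies survival time, so median(T|trt=1)/median(T|trt=0) ~ exp(b1)
median_ratio = np.median(T[trt == 1]) / np.median(T[trt == 0])
```

The multiplicative action on time itself, visible in `median_ratio`, is what lets AFT coefficients slot into product-of-coefficients mediation analyses more directly than PH coefficients, which act on the hazard.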
Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo
2013-04-01
The Cordilleras in Peru are especially vulnerable to, and affected by, impacts from climate change. Local communities and cities often lie directly within the reach of major hazard potentials such as lake outburst floods (aluviones), mud-/debris flows (huaycos) and large rock-/ice avalanches. These hazards have repeatedly and strongly affected the region over the last decades and since the last century, and thousands of people have been killed. One of the most recent events in the Cordillera Blanca occurred on 11 April 2010, when a rock/ice avalanche from the top of Hualcán mountain, NE of the town of Carhuaz, impacted glacier lake 513 (Laguna 513), caused displacement waves and triggered an outburst flood wave. The flow repeatedly transformed between debris flow and hyperconcentrated flow and eventually caused significant damage in Carhuaz. This event motivated early warning and prevention efforts to reduce risks related to ice/rock avalanches and glacier lake outburst floods (GLOFs). One of the basic components of an early warning system is the assessment, understanding and communication of the relevant hazards and risks. Here we report on the methodology and results of generating GLOF-related hazard maps for Carhuaz based on numerical modeling and field work. This exercise required an advanced concept and implementation of different mass movement models. Specifically, numerical models were applied for simulating avalanche flow, avalanche impact on the lake, displacement wave generation and lake overtopping, and eventually flow propagation of the outburst flood with changing rheology between debris flow and hyperconcentrated flow. We adopted a hazard mapping procedure slightly adjusted from guidelines developed in Switzerland and in the Andes region. A methodology has thereby been developed to translate results from numerical mass movement modeling into hazard maps. The resulting hazard map was verified and adjusted during field work. This study shows
Kumar, Ashok; And Others
1989-01-01
Provides an overview of the Computer-Aided Management of Emergency Operations (CAMEO) model and its use in the classroom as a training tool in the "Hazardous Chemical Spills" course. Presents six problems illustrating classroom use of CAMEO. Lists 16 references. (YP)
Preparing a seismic hazard model for Switzerland: the view from PEGASOS Expert Group 3 (EG1c)
Energy Technology Data Exchange (ETDEWEB)
Musson, R. M. W. [British Geological Survey, West Mains Road, Edinburgh, EH9 3LA (United Kingdom); Sellami, S. [Swiss Seismological Service, ETH-Hoenggerberg, Zuerich (Switzerland); Bruestle, W. [Regierungspraesidium Freiburg, Abt. 9: Landesamt fuer Geologie, Rohstoffe und Bergbau, Ref. 98: Landeserdbebendienst, Freiburg im Breisgau (Germany)
2009-05-15
The seismic hazard model used in the PEGASOS project for assessing earthquake hazard at four NPP sites was a composite of four sub-models, each produced by a team of three experts. In this paper, one of these models is described in detail by the authors. A criticism sometimes levelled at probabilistic seismic hazard studies is that the process by which seismic source zones are arrived at is obscure, subjective and inconsistent. Here, we attempt to recount the stages by which the model evolved, and the decisions made along the way. In particular, a macro-to-micro approach was used, in which three main stages can be described. The first was the characterisation of the overall kinematic model, the 'big picture' of regional seismogenesis. Secondly, this was refined to a more detailed seismotectonic model. Lastly, this was used as the basis of individual sources, for which parameters can be assessed. Some basic questions also had to be answered about aspects of the modelling approach: for instance, is spatial smoothing an appropriate tool to apply? Should individual fault sources be modelled in an intra-plate environment? Also, the extent to which alternative modelling decisions should be expressed in a logic tree structure had to be considered. (author)
Survival prediction based on compound covariate under Cox proportional hazard models.
Directory of Open Access Journals (Sweden)
Takeshi Emura
Full Text Available Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.
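The compound covariate idea — fit a univariate Cox model per covariate, then use the sum of coefficient-weighted covariates as a prognostic index — can be sketched with a small Newton-Raphson Cox fitter. All simulation parameters are invented for illustration and the fitter assumes untied event times; it is not the `compound.Cox` implementation:

```python
import numpy as np

def cox_uni(x, time, event, iters=25):
    # univariate Cox PH estimate via Newton-Raphson on the partial likelihood (no ties)
    o = np.argsort(time)
    x, event = x[o], event[o]
    beta = 0.0
    for _ in range(iters):
        w = np.exp(beta * x)
        s0 = np.cumsum(w[::-1])[::-1]              # risk-set sums (subjects with time >= t_i)
        s1 = np.cumsum((w * x)[::-1])[::-1]
        s2 = np.cumsum((w * x * x)[::-1])[::-1]
        xbar = s1 / s0
        U = np.sum(event * (x - xbar))             # score
        I = np.sum(event * (s2 / s0 - xbar ** 2))  # observed information
        beta += U / I
    return beta

# synthetic "many covariates" survival data (hypothetical parameters)
rng = np.random.default_rng(5)
n, p = 400, 20
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [1.0, -1.0, 0.5]
lp = X @ true_beta
T = rng.exponential(1.0, n) / np.exp(lp)           # exponential times with hazard exp(lp)
c = np.quantile(T, 0.8)                            # ~20% administrative censoring
time, event = np.minimum(T, c), T <= c

betas = np.array([cox_uni(X[:, j], time, event) for j in range(p)])
score = X @ betas                                  # compound covariate prognostic index
corr = float(np.corrcoef(score, lp)[0, 1])
```

Even though each univariate coefficient is biased by the omitted covariates, the compound score tracks the true linear predictor closely, which is the intuition the abstract's refinement builds on.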
Institute of Scientific and Technical Information of China (English)
王鹭; 张利; 王学芝
2015-01-01
As the central component of rotating machines, bearings require performance reliability assessment and remaining useful lifetime prediction, which are of crucial importance in condition-based maintenance to reduce maintenance cost and improve reliability. A prognostic algorithm to assess the reliability and forecast the remaining useful lifetime (RUL) of bearings is proposed, consisting of three phases. Online vibration and temperature signals of bearings in the normal state were measured during the manufacturing process, and the most useful time-dependent features of the vibration signals were extracted based on correlation analysis (feature selection step). Time series analysis based on a neural network, as an identification model, was used to predict the features of the bearing vibration signals at any horizon (feature prediction step). Furthermore, a degradation factor was defined from the features, and a proportional hazard model was built to estimate the survival function and forecast the RUL of the bearing (RUL prediction step). The results demonstrate the plausibility and effectiveness of the proposed approach, which can facilitate bearing reliability estimation and RUL prediction.
Seismic hazard studies in Egypt
Mohamed, Abuo El-Ela A.; El-Hadidy, M.; Deif, A.; Abou Elenean, K.
2012-12-01
The study of earthquake activity and seismic hazard assessment of Egypt is very important due to the rapid growth of large investments in national projects, especially the nuclear power plant planned for the northern part of Egypt. Although Egypt is characterized by low seismicity, it has experienced damaging earthquakes throughout its history. The seismotectonic setting of Egypt suggests that large earthquakes are possible, particularly along the Gulf of Aqaba-Dead Sea transform, the subduction zone along the Hellenic and Cyprean Arcs, and the northern Red Sea triple junction point. In addition, some significant inland sources at Aswan, Dahshour, and the Cairo-Suez District should be considered. The seismic hazard for Egypt is calculated utilizing a probabilistic approach (for a grid of 0.5° × 0.5°) within a logic-tree framework. Alternative seismogenic models and ground motion scaling relationships are selected to account for the epistemic uncertainty. Seismic hazard values on rock were calculated to create contour maps for four ground motion spectral periods and for different return periods. In addition, the uniform hazard spectra for rock sites at 25 different periods, and the probabilistic hazard curves for the cities of Cairo and Alexandria, are graphed. The peak ground acceleration (PGA) values were highest close to the Gulf of Aqaba, about 220 gal for a 475-year return period, while the lowest PGA values, less than 25 gal, were found in the western part of the Western Desert.
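The probabilistic calculation behind such hazard curves — combining a source activity rate, a magnitude distribution, and a ground-motion model with aleatory scatter — can be sketched for a single idealized source. The activity rate, Gutenberg-Richter parameters, distance, and GMPE coefficients below are invented for illustration and are not the values used in the Egypt study:

```python
import math
import numpy as np

erfc_v = np.vectorize(math.erfc)       # vectorized complementary error function

nu, b = 0.05, 1.0                      # rate of M>=5 events/yr and G-R b-value (hypothetical source)
m = np.linspace(5.0, 7.5, 251)         # doubly truncated magnitude range
pmf = 10.0 ** (-b * m)
pmf /= pmf.sum()                       # truncated Gutenberg-Richter magnitude pmf

r, sigma = 30.0, 0.6                   # source distance (km) and aleatory std of ln(PGA)
ln_med = -3.5 + 1.0 * m - 1.2 * math.log(r + 10.0)   # hypothetical GMPE, PGA in g

a = np.logspace(-2, 0, 50)             # PGA levels, g
z = (np.log(a)[:, None] - ln_med[None, :]) / sigma
p_exc = 0.5 * erfc_v(z / math.sqrt(2)) # P(PGA > a | m, r), lognormal scatter
lam = nu * (p_exc * pmf[None, :]).sum(axis=1)        # annual exceedance rate per PGA level

p50 = 1.0 - np.exp(-lam * 50.0)        # exceedance probability in a 50-year exposure
```

A logic tree, as used in the study, would repeat this integration over alternative source models and GMPEs and weight the resulting curves.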
Maechling, P. J.; Jordan, T. H.; Kesselman, C.; Moore, R.; Minster, B.; SCEC ITR Collaboration
2003-12-01
The Southern California Earthquake Center (SCEC) has formed a Geoscience/IT partnership to develop an advanced information infrastructure for system-level earthquake science in Southern California. This SCEC/ITR partnership comprises SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Institutions for Research in Seismology (IRIS), and the U.S. Geological Survey. This collaboration recently completed the second year in a five-year National Science Foundation (NSF) funded ITR project called the SCEC Community Modeling Environment (SCEC/CME). The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed by project collaborators include a Probabilistic Seismic Hazard Analysis system called OpenSHA [Field et al., this meeting]. OpenSHA computational elements that are currently available include a collection of attenuation relationships, and several Earthquake Rupture Forecasts (ERF's). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. A Rupture Dynamic Model (RDM) has also been developed that couples a rupture dynamics simulation into an anelastic wave model. The collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of SHA programs. To support computationally expensive simulations, we have constructed a grid-based system utilizing Globus software [Kesselman et al., this meeting]. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC, NPACI and Teragrid High Performance Computing Centers. We have
Quentel, E.; Loevenbruck, A.; Hébert, H.
2012-04-01
The catastrophic 2004 tsunami drew the international community's attention to tsunami risk in all basins where tsunamis have occurred but no warning system exists. Consequently, under the coordination of UNESCO, France decided to create a regional center, called CENALT, for the north-east Atlantic and the western Mediterranean. This warning system, which should be operational by 2012, is being set up by the CEA in collaboration with the SHOM and the CNRS. The French authorities are in charge of the top-down alert system, including local alert dissemination. In order to prepare the appropriate means and measures, they initiated the ALDES (Alerte Descendante) project, to which the CEA also contributes. It aims at examining tsunami risk related to earthquakes and landslides along the French Mediterranean coast. In addition to the evaluation at regional scale, it includes detailed studies of 3 selected sites; the local alert system will be designed for one of them: the French Riviera. In this project, our main task at CEA consists in assessing tsunami hazard related to seismic sources using numerical modeling. Past tsunamis have affected the west Mediterranean coast but are too few and poorly documented to provide a suitable database. Thus, a synthesis of earthquakes representative of the tsunamigenic seismic activity and prone to induce the largest impact on the French coast is performed based on historical data, seismotectonics and first order models. The North Africa Margin, the Ligurian and the South Tyrrhenian Seas are considered the main tsunamigenic zones. In order to forecast the most important plausible effects, the magnitudes are estimated by enhancing to some extent the largest known values. Our hazard estimation is based on the simulation of the induced tsunami scenarios performed with the CEA code. The 3 sites have been chosen according to the regional hazard studies, coastal typology elements and the appropriate DTMs (Digital Terrain Models).
A class of additive-accelerated means regression models for recurrent event data
Institute of Scientific and Technical Information of China (English)
[No author listed]
2010-01-01
In this article, we propose a class of additive-accelerated means regression models for analyzing recurrent event data. The class includes the proportional means model, the additive rates model, the accelerated failure time model, the accelerated rates model and the additive-accelerated rate model as special cases. The new model offers great flexibility in formulating the effects of covariates on the mean functions of counting processes while leaving the stochastic structure completely unspecified. For the inference on the model parameters, estimating equation approaches are derived and asymptotic properties of the proposed estimators are established. In addition, a technique is provided for model checking. The finite-sample behavior of the proposed methods is examined through Monte Carlo simulation studies, and an application to a bladder cancer study is illustrated.
Eble, M. C.; uslu, B. U.; Wright, L.
2013-12-01
Synthetic tsunamis generated from source regions around the Pacific Basin are analyzed in terms of their relative impact on United States coastal locations. The region of tsunami origin is as important as the expected magnitude and the predicted inundation for understanding tsunami hazard. The NOAA Center for Tsunami Research has developed high-resolution tsunami models capable of predicting tsunami arrival time and wave amplitude at each location. These models have been used to conduct tsunami hazard assessments of maximum impact and tsunami inundation for use by local communities in education and evacuation map development. Hazard assessment studies conducted for Los Angeles, San Francisco, Crescent City, Hilo, and Apra Harbor are combined with results of tsunami forecast model development at each of seventy-five locations. A complete hazard assessment identifies every possible tsunami variation from a pre-computed propagation database. Study results indicate that the Eastern Aleutian Islands and Alaska are the most likely regions to produce the largest impact on the West Coast of the United States, while the East Philippines and Mariana Trench regions impact Apra Harbor, Guam. Hawaii appears to be impacted equally by South America, Alaska and the Kuril Islands.
Multiple High-Fidelity Modeling Tools for Metal Additive Manufacturing Process Development Project
National Aeronautics and Space Administration — Despite the rapid commercialization of additive manufacturing technology such as selective laser melting, SLM, there are gaps in process modeling and material...
Della Seta, Marta; Marotta, Enrica; Orsi, Giovanni; de Vita, Sandro; Sansivero, Fabio; Fredi, Paola
2012-01-01
Ischia is an active volcanic island in the Gulf of Naples whose history has been dominated by a caldera-forming eruption (ca. 55 ka) and resurgence phenomena that have affected the caldera floor and generated a net uplift of about 900 m since 33 ka. The results of new geomorphological, stratigraphical and textural investigations of the products of gravitational movements triggered by volcano-tectonic events have been combined with the information arising from a reinterpretation of historical chronicles on natural phenomena such as earthquakes, ground deformation, gravitational movements and volcanic eruptions. The combined interpretation of all these data shows that gravitational movements, coeval with volcanic activity and uplift events related to the long-lasting resurgence, have affected the highly fractured marginal portions of the most uplifted Mt. Epomeo blocks. Such movements, mostly occurring since 3 ka, include debris avalanches; large debris flows (lahars); smaller mass movements (rock falls, slumps, debris and rock slides, and small debris flows); and deep-seated gravitational slope deformation. The occurrence of submarine deposits linked with subaerial deposits of the most voluminous mass movements clearly shows that the debris avalanches reached the sea. The obtained results corroborate the hypothesis that the behaviour of the Ischia volcano is governed by an intimate interplay among magmatism, resurgence dynamics, fault generation, seismicity, slope oversteepening and instability, and eruptions. They also highlight that volcano-tectonically triggered mass movements are potentially hazardous phenomena that have to be taken into account in any attempt to assess volcanic and related hazards at Ischia. Furthermore, the largest mass movements could also flow into the sea, generating tsunami waves that could impact the island's coast as well as the neighbouring and densely inhabited coast of the Neapolitan area.
Abdelnoor, M; Hauge, S N; Hall, K V
1986-01-01
An alternative approach to the study of the follow-up of patients with heart prostheses is the use of the reliability theory (hazard function) and proportional hazard model (Cox's model). In a population of 480 patients who underwent AVR in the period from June 1977 to January 1983, with a mean follow-up time of 2.8 years, 16 preoperative variables were considered. From this pool of variables, six entered the regression model in a time-independent mode. These were age at operation, sex, preoperative NYHA classification, presence of AI, presence of endocarditis and presence of atrial fibrillation on ECG, none of which entered the model in the time-related mode. Another multifactorial approach, using a stepwise regression analysis to examine primary predictive factors that independently correlate with survival, while simultaneously accounting for the other previous variables, showed that the variables with additive prognostic value were age at operation, presence of AI and presence of endocarditis. Based on this model, a forecast five-year survival rate ranging from 88 to 14 per cent was found at the end of the fifth year. For the most favourable and the worst combinations of these prognostic variables, a patient-specific forecast five-year survival rate was drawn up. Our results were compared, using univariate and multivariate methods, with the results found in the literature, and the implications of this comparison were discussed.
Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling
Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.
2014-01-01
Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K – 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65, p<.01) and Loss of Privileges (AOR=0.71, p<.10) were significant in reducing the rate of the reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students’ mesosystems as well as utilizing disciplinary strategies that take into consideration student’s microsystem roles. PMID:22878779
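Discrete-time hazard models of this kind are typically fit as logistic regressions on a person-period data set. The self-contained sketch below simulates repeat referrals and recovers an intervention effect; the effect sizes, period count, and intervention label are invented, not SWIS estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 2000, 8                         # students and school periods observed
x = rng.integers(0, 2, n)              # hypothetical intervention flag (e.g., parent-teacher conference)
a, b = -2.0, -0.6                      # true logit-hazard intercept and intervention effect

# discrete failure times: with a constant per-period hazard given x, times are geometric
h = 1.0 / (1.0 + np.exp(-(a + b * x)))
U = rng.geometric(h)                   # period of a second referral; U > T means censored

# person-period expansion: one row per student-period at risk
rows, ys = [], []
for i in range(n):
    last = min(int(U[i]), T)
    for t in range(1, last + 1):
        rows.append([1.0, float(x[i])])
        ys.append(1.0 if U[i] == t else 0.0)
Xpp, y = np.array(rows), np.array(ys)

# logistic regression by Newton-Raphson on the person-period rows recovers (a, b)
beta = np.zeros(2)
for _ in range(25):
    pr = 1.0 / (1.0 + np.exp(-(Xpp @ beta)))
    W = pr * (1.0 - pr)
    beta += np.linalg.solve(Xpp.T @ (W[:, None] * Xpp), Xpp.T @ (y - pr))
aor = float(np.exp(beta[1]))           # adjusted odds ratio for the intervention
```

A multilevel version, as in the paper, would add school-level random effects and period-specific intercepts; this sketch keeps a single constant baseline for brevity.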
Examining school-based bullying interventions using multilevel discrete time hazard modeling.
Ayers, Stephanie L; Wagaman, M Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E C
2012-10-01
Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K - 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p < .01) and Loss of Privileges (AOR = 0.71, p < .10) were significant in reducing the rate of the reoccurrence of bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration student's microsystem roles.
Simulating floods : On the application of a 2D-hydraulic model for flood hazard and risk assessment
Alkema, D.
2007-01-01
Over the last decades, river floods in Europe seem to occur more frequently and are causing more and more economic and emotional damage. Understanding the processes causing flooding and the development of simulation models to evaluate countermeasures to control that damage are important issues. This study deals with the application of a 2D hydraulic flood propagation model for flood hazard and risk assessment. It focuses on two components: 1) how well does it predict the spatial-dynamic chara...
Seufzer, William J.
2014-01-01
Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.
Stiffness Model of a 3-DOF Parallel Manipulator with Two Additional Legs
Directory of Open Access Journals (Sweden)
Guang Yu
2014-10-01
Full Text Available This paper investigates the stiffness modelling of a 3-DOF parallel manipulator with two additional legs. The stiffness model in six directions of the 3-DOF parallel manipulator with two additional legs is derived by performing condensation of DOFs for the joint connections and treatment of the fixed-end connections. Moreover, this modelling method is used to derive the stiffness model of the manipulator with zero/one additional legs. Two performance indices are given to compare the stiffness of the parallel manipulator with two additional legs against that of the manipulators with zero/one additional legs. The method can be used not only to derive the stiffness model of a redundant parallel manipulator, but also to model the stiffness of non-redundant parallel manipulators.
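The DOF-condensation step the abstract refers to is, in its simplest form, static (Guyan) condensation, K_red = K_mm − K_ms K_ss⁻¹ K_sm. A toy sketch on a spring chain, where the condensed tip stiffness must equal the analytic series stiffness (the manipulator's actual 6-direction stiffness matrices are of course far richer):

```python
import numpy as np

def spring_chain_K(ks):
    # assemble the global stiffness matrix of springs in series (nodes 0..len(ks))
    n = len(ks) + 1
    K = np.zeros((n, n))
    for e, k in enumerate(ks):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

ks = [100.0, 200.0, 400.0]            # hypothetical spring stiffnesses
K = spring_chain_K(ks)[1:, 1:]        # fix node 0 (ground): delete its row/column

keep = [2]                            # master DOF: the tip node (nodes 1,2,3 -> indices 0,1,2)
drop = [0, 1]                         # internal DOFs to condense away
Kmm = K[np.ix_(keep, keep)]
Kms = K[np.ix_(keep, drop)]
Kss = K[np.ix_(drop, drop)]
K_red = Kmm - Kms @ np.linalg.solve(Kss, Kms.T)   # Guyan / static condensation

k_series = 1.0 / sum(1.0 / k for k in ks)         # analytic check: springs in series
```

The same algebra, applied block-wise to the leg and joint stiffness matrices, yields the manipulator's condensed 6×6 stiffness at the moving platform.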
Modeling lifetime data with multiple causes using cause specific reversed hazard rates
Directory of Open Access Journals (Sweden)
Paduthol Godan Sankaran
2014-09-01
Full Text Available In this paper we introduce and study cause specific reversed hazard rates in the context of left censored lifetime data with multiple causes. A nonparametric inference procedure for left censored lifetime data with multiple causes using cause specific reversed hazard rates is discussed. Asymptotic properties of the estimators are studied. Simulation studies are conducted to assess the efficiency of the estimators. Further, the proposed method is applied to mice mortality data (Hoel 1972) and Australian twin data (Duffy et al. 1990).
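For orientation, the cause-specific reversed hazard rate for a lifetime $T$ with failure cause $C \in \{1,\dots,k\}$ is usually defined as (a standard form; the paper's notation may differ):

```latex
\tilde{\lambda}_j(t) \;=\; \lim_{\Delta t \to 0^{+}}
  \frac{P\!\left(t - \Delta t < T \le t,\; C = j \,\middle|\, T \le t\right)}{\Delta t},
\qquad j = 1, \dots, k ,
```

so that the total reversed hazard $\tilde{\lambda}(t) = \sum_{j=1}^{k} \tilde{\lambda}_j(t) = f(t)/F(t)$. Conditioning on $\{T \le t\}$ rather than $\{T \ge t\}$ is what makes the reversed hazard the natural tool for left censored data.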
Sørensen, Mathilde Bøttger
2006-01-01
Seismic hazard assessment has an important societal impact in describing levels of ground motion to be expected in a given region in the future. Challenges in seismic hazard assessment are closely associated with the fact that different regions, due to their differences in seismotectonic setting (and hence in earthquake occurrence) as well as socioeconomic conditions, require different and innovative approaches. One of the most important aspects in this regard is the seismici...
Directory of Open Access Journals (Sweden)
P. Horton
2013-04-01
Full Text Available The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org) and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choices of the datasets and the algorithms are open to the user, which makes the model adaptable to various applications and dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution as a good compromise between processing time
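Holmgren's multiple-flow-direction algorithm distributes flow to the downslope neighbours of a DEM cell with weights proportional to (tan β)^x; raising the exponent x concentrates flow toward the steepest direction. The sketch below illustrates the classic weights on a 3×3 window, with a `dh` height addition of the kind used in modified versions such as the one in Flow-R; the exponent, cell size, and elevations are illustrative, and this is not the Flow-R code itself:

```python
import numpy as np

def holmgren_weights(z, x=4.0, cell=10.0, dh=0.0):
    """Flow-distribution weights from the centre of a 3x3 DEM window.

    Classic Holmgren (1994) multiple-flow-direction weights; dh > 0 adds height
    to the central cell, as in modified versions, to reduce sensitivity to
    small DEM variations.
    """
    s2 = 2.0 ** 0.5
    dist = cell * np.array([[s2, 1.0, s2],
                            [1.0, np.inf, 1.0],   # inf excludes the centre cell itself
                            [s2, 1.0, s2]])
    tanb = (z[1, 1] + dh - z) / dist              # gradients toward the 8 neighbours
    tanb = np.where(tanb > 0.0, tanb, 0.0)        # only downslope cells receive flow
    w = tanb ** x
    total = w.sum()
    return w / total if total > 0 else w

z = np.array([[102.0, 101.0, 103.0],
              [101.0, 100.5, 104.0],
              [100.0,  99.0, 102.0]])             # toy 3x3 elevation window (m)
w = holmgren_weights(z, x=4.0)
```

As x grows, the weights converge to the single steepest descent direction, which is the tuning knob the abstract's "spreading algorithm" discussion is about.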
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times that are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate.
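For the simplest of these settings (an exponential baseline hazard and a dichotomous covariate that switches on once at a known time t0), the closed-form inversion of the cumulative hazard can be sketched as follows. The parameter values and names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def sim_time(lam, beta_z, gamma, t0, rng):
    """Simulate one event time under h(t) = lam * exp(beta_z + gamma * I(t >= t0)).

    Inverts the piecewise-linear cumulative hazard: a unit-exponential
    draw is mapped back to the time scale, using the pre-switch hazard
    before t0 and the post-switch hazard afterwards."""
    e = rng.exponential()              # -log(U), a unit-exponential draw
    h0 = lam * np.exp(beta_z)          # hazard before the covariate switches
    h1 = lam * np.exp(beta_z + gamma)  # hazard after the covariate switches
    if e < h0 * t0:                    # event occurs before treatment starts
        return e / h0
    return t0 + (e - h0 * t0) / h1     # event occurs after treatment starts

# Quick check: with gamma < 0 (protective treatment), mean survival is longer
times_treated = [sim_time(0.5, 0.0, -1.0, 1.0, rng) for _ in range(20000)]
times_control = [sim_time(0.5, 0.0, 0.0, 1.0, rng) for _ in range(20000)]
print(np.mean(times_treated) > np.mean(times_control))  # True
```

The same inversion idea extends to Weibull and Gompertz baselines, with the cumulative hazard inverted piecewise around t0.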
Combining observations and model simulations to reduce the hazard of Etna volcanic ash plumes
Scollo, Simona; Boselli, Antonella; Coltelli, Mauro; Leto, Giuseppe; Pisani, Gianluca; Prestifilippo, Michele; Spinelli, Nicola; Wang, Xuan; Zanmar Sanchez, Ricardo
2014-05-01
Etna is one of the most active volcanoes in the world, with recent activity characterized by powerful lava fountains that produce eruption columns several kilometres high and disperse volcanic ash in the atmosphere. It is well known that, to improve the volcanic ash dispersal forecast of an ongoing explosive eruption, input parameters used by volcanic ash dispersal models should be measured during the eruption. In this work, in order to better quantify the volcanic ash dispersal, we use data from the video-surveillance system of the Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo, and from a lidar system, together with a volcanic ash dispersal model. In detail, the visible camera installed in Catania, 27 km from the vent, is able to track the evolution of the column height with time. The lidar, installed at the "M.G. Fracastoro" astrophysical observatory (14.97° E, 37.69° N) of the Istituto Nazionale di Astrofisica in Catania, located at a distance of 7 km from the Etna summit craters, uses a frequency-doubled Nd:YAG laser source operating at a 532-nm wavelength, with a repetition rate of 1 kHz. Backscattering and depolarization values measured by the lidar system can give, with a certain degree of uncertainty, an estimate of the volcanic ash concentration in the atmosphere. The 12 August 2011 activity is considered a perfect test case because the volcanic plume was observed by both the camera and the lidar. We evaluated the mass eruption rate from the column height and used best-fit procedures to compare simulated volcanic ash concentrations with those retrieved from the lidar data. During this event, powerful lava fountains were clearly visible at about 08:30 GMT and a sustained eruption column was produced from about 08:55 GMT. Ash emission ceased completely around 11:30 GMT. The proposed approach is an attempt to produce more robust ash dispersal forecasts, reducing the hazard to air traffic during Etna volcanic crises.
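The abstract does not state how the mass eruption rate is derived from the camera-derived column height; a commonly used empirical relation (Mastin et al., 2009) links plume height above the vent H (km) to dense-rock-equivalent volumetric flow rate V (m³/s) via H = 2.00·V^0.241, and inverting it gives a first-order estimate. The DRE magma density below is an assumed value:

```python
# First-order mass eruption rate from plume height, using the empirical
# Mastin et al. (2009) scaling H = 2.00 * V**0.241 (H in km above the
# vent, V in m^3/s DRE). This is a generic relation, not necessarily the
# best-fit procedure used in the study.
def mass_eruption_rate(height_km, dre_density=2500.0):
    """Return an order-of-magnitude MER (kg/s) from plume height (km)."""
    v = (height_km / 2.00) ** (1.0 / 0.241)  # volumetric flow rate, m^3/s DRE
    return v * dre_density

# e.g. a 9-km column, typical of strong Etna lava-fountain plumes
print(f"{mass_eruption_rate(9.0):.2e} kg/s")
```

Such scaling relations carry roughly a factor-of-four uncertainty in V, which is why constraining them with lidar-derived concentrations is valuable.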
A spatiotemporal optimization model for the evacuation of the population exposed to flood hazard
Alaeddine, H.; Serrhini, K.; Maizia, M.
2015-03-01
Managing the crisis caused by natural disasters, and especially by floods, requires the development of effective evacuation systems. An effective evacuation system must take into account certain constraints, including those related to the traffic network, accessibility, human resources and material equipment (vehicles, collection points, etc.). The main objective of this work is to assist technical services and rescue forces with accessibility by providing itineraries for the rescue and evacuation of people and property. We consider in this paper the evacuation of a medium-sized urban area exposed to flood hazard. In the case of inundation, most people will be evacuated using their own vehicles. Two evacuation types are addressed in this paper: (1) a preventive evacuation based on a flood forecasting system and (2) an evacuation during the disaster based on flooding scenarios. The two study sites to which the developed evacuation model is applied are the Tours valley (Fr, 37), which is protected by a set of dikes (preventive evacuation), and the Gien valley (Fr, 45), which experiences a slow rate of flooding (evacuation before and during the disaster). Our goal is to construct, for each of these two sites, a chronological evacuation plan, i.e., to compute for each individual the departure time and the path to reach the assembly point (also called a shelter) according to a priority list established for this purpose. The evacuation plan must avoid congestion on the road network. Here we present a spatiotemporal optimization model (STOM) dedicated to the evacuation of populations exposed to natural disasters, and more specifically to flood risk.
Probabilistic Seismic Hazard Assessment for Taiwan
Directory of Open Access Journals (Sweden)
Yu-Ju Wang
2016-06-01
Full Text Available The Taiwan Earthquake Model (TEM) was established to assess the seismic hazard and risk for Taiwan by considering the social and economic impacts of various components from geology, seismology, and engineering. This paper presents the first version of the TEM probabilistic seismic hazard analysis for Taiwan, named TEM PSHA2015. The model adopts the source parameters of 38 seismogenic structures identified by TEM geologists. In addition to specific fault-source-based categorization, seismic activities are categorized as shallow, subduction intraplate, and subduction interplate events. To evaluate the potential ground shaking resulting from each seismic source, the corresponding ground-motion prediction equations for crustal and subduction earthquakes are adopted. The highest hazard probability is evaluated to be in southwestern Taiwan and the Longitudinal Valley of eastern Taiwan. Among the special municipalities in the highly populated western Taiwan region, Taichung, Tainan, and New Taipei City are evaluated to have the highest hazard. Tainan has the highest seismic hazard for peak ground acceleration in the model based on TEM fault parameters. In terms of pseudo-spectral acceleration, Tainan has higher hazard over short spectral periods, whereas Taichung has higher hazard over long spectral periods. The analysis indicates the importance of earthquake-resistant designs for low-rise buildings in Tainan and high-rise buildings in Taichung.
Dankers, Rutger; Arnell, Nigel W.; Clark, Douglas B.; Falloon, Pete D.; Fekete, Balázs M.; Gosling, Simon N.; Heinke, Jens; Kim, Hyungjun; Masaki, Yoshimitsu; Satoh, Yusuke; Stacke, Tobias; Wada, Yoshihide; Wisser, Dominik
2014-03-01
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20-45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur in more than 1 in 5 y across 5-30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty which needs to be taken into account in local adaptation studies.
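The 30-year return level used above as the flood hazard indicator can be estimated from a series of annual maxima by fitting an extreme-value distribution and reading off the quantile exceeded with probability 1/30 per year. A minimal sketch with synthetic data (the Gumbel parameters and series length are invented for illustration; real inputs would be annual maxima of 5-d average flows from the hydrological models):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "annual maximum 5-day average flow" series (stand-in for model output)
annual_max = stats.gumbel_r.rvs(loc=100.0, scale=20.0, size=200, random_state=rng)

# Fit a generalized extreme value (GEV) distribution to the annual maxima
shape, loc, scale = stats.genextreme.fit(annual_max)

# The 30-year return level is the quantile with annual exceedance probability 1/30
rl30 = stats.genextreme.ppf(1.0 - 1.0 / 30.0, shape, loc=loc, scale=scale)
print(round(rl30, 1))
```

Comparing return levels fitted to control-period and end-of-century simulations, grid point by grid point, yields the kind of change map discussed in the abstract.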
Zhang, Baoqing; Wu, Pute; Zhao, Xining; Wang, Yubao; Gao, Xiaodong; Cao, Xinchun
2013-10-01
Drought is a complex natural hazard that is poorly understood and difficult to assess. This paper describes a VIC-PDSI model approach to understanding drought, in which the Variable Infiltration Capacity (VIC) model was combined with the Palmer Drought Severity Index (PDSI). Simulated results obtained using the VIC model were used to replace the output of the more conventional two-layer bucket-type model for hydrological accounting, and a two-class-based procedure for calibrating the characteristic climate coefficient (Kj) was introduced to allow a more reliable computation of the PDSI. The VIC-PDSI model was used in conjunction with GIS technology to create a new drought assessment index (DAI) that provides a comprehensive overview of drought duration, intensity, frequency, and spatial extent. This new index was applied to drought hazard assessment across six subregions of the Loess Plateau. The results show that the DAI over the whole Loess Plateau ranged between 11 and 26 (a greater DAI value indicates a more severe drought hazard). The drought hazards in the upper reaches of the Yellow River were more severe than those in the middle reaches. The drought-prone regions of the study area were mainly concentrated in the small river basins of Inner Mongolia and the Zuli and Qingshui River basins, while the drought hazards in the drainage area between Hekouzhen-Longmen and the Weihe River basin were relatively mild during 1971-2010. The most serious drought vulnerabilities were associated with the areas around Lanzhou, Zhongning, and Yinchuan, where the development of water-saving irrigation is the most direct and effective way to defend against and reduce losses from drought. For the relatively humid regions, it will be necessary to establish rainwater harvesting systems, which could help to relieve the risk of water shortage and guarantee regional food security. Because the DAI considers the multiple characteristics of drought duration, intensity, frequency
Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.
2007-05-01
The present study concerns the numerical modeling of debris avalanches on Nevado de Toluca Volcano (Mexico) using the TITAN2D simulation software, and its application to creating hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central Mexico near the cities of Toluca and Mexico City; its past activity has endangered an area that today holds more than 25 million inhabitants. The present work is based upon data collected during extensive field work aimed at producing the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; Nevado de Toluca has experienced long phases of inactivity characterized by erosion and by emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are large debris flows and debris avalanches that occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events occurred mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonically induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards the east and the Nopal debris avalanche deposit towards the west. The analysis of the data collected during the field work made it possible to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. The flow models, performed with the TITAN2D software developed by the GMFG at Buffalo, were based entirely upon the information stored in the geological database. The modeling software is built upon equations
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
Stewart, R. B.; Grose, W. L.
1975-01-01
Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, stabilized cloud geometry, atmospheric coefficients, the effects of exhaust plume afterburning of carbon monoxide (CO), the assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud along-wind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide (CO2), thus reducing toxic hazards due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.
Tappin, David R.
2015-04-01
the resolution necessary to identify the hazard from landslides, particularly along convergent margins where this hazard is the greatest. Multibeam mapping of the deep seabed requires low-frequency sound sources that, because of their correspondingly low resolution, cannot produce the detail required to identify the finest-scale features. In addition, outside of most countries there are no repeat surveys that allow seabed changes to be identified; perhaps only Japan has such data. In the near future, as research budgets shrink and ship time becomes ever more expensive, new strategies will have to be used to make the best use of the vessels available. Remote AUV technology is almost certainly the answer, and should be increasingly utilised to map the seabed while the mother ship is better used to carry out other duties, such as sampling or seismic data acquisition. This will have the advantage, in the deep ocean, of acquiring higher-resolution data from high-frequency multibeam systems. This talk presents a number of projects that show the evolution of the use of MBES in mapping submarine landslides since the PNG tsunami. Data from PNG are presented, together with data from Japan, Hawaii and the NE Atlantic. New multibeam acquisition methodologies are also discussed.
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
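The core idea above (letting the data inform the functional form of the trend rather than imposing linearity) can be illustrated in a few lines. GAMs proper use penalized regression splines; here a smoothing spline stands in for that machinery, and the short series is synthetic, mimicking a curved trend in an SCD-length record:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# A short SCD-like series with a curved (non-linear) trend that a
# straight line would misrepresent
t = np.arange(20.0)
y = np.log1p(t) * 3.0 + rng.normal(0.0, 0.3, size=t.size)

# Impose a linear trend (the parametric-regression approach)...
slope, intercept = np.polyfit(t, y, 1)
resid_linear = y - (slope * t + intercept)

# ...versus letting the data inform the functional form (the GAM idea;
# the smoothing parameter s plays the role of the penalty)
spline = UnivariateSpline(t, y, k=3, s=t.size * 0.3**2)
resid_spline = y - spline(t)

print(np.sum(resid_spline**2) < np.sum(resid_linear**2))
```

The data-driven fit captures the curvature the linear model misses, which is exactly the kind of sensitivity check for trend assumptions the article recommends.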
Assessment of erosion hazard after recurrence fires with the RUSLE 3D MODEL
Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María
2016-04-01
The objective of this work is to determine whether soil erosion increases after recurrent forest fires in an area. To that end, an area of 22,130 ha with a high frequency of fires was studied. This area is located in the northwest of the Iberian Peninsula. The erosion hazard was assessed at several points in time using Geographic Information Systems (GIS). The area was divided into several plots according to the number of times they had been burnt in the past 15 years. Given the complexity of studying such a large area in detail, and because information is not available annually, it was necessary to select the most relevant dates. In August 2012 the most aggressive and extensive fire in the area occurred. The study therefore focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D (Revised Universal Soil Loss Equation 3D) model was used to calculate maps of erosion losses. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope length-steepness factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible: the GIS package gvSIG (http://www.gvsig.com/es) was used, and the data were taken from the Spatial Data Infrastructure of Spain web page (IDEE, 2016). However, the RUSLE model has critics: some authors suggest that it serves only for comparisons between areas, not for calculating absolute soil loss. These authors argue that the eroded soil actually recovered in field measurements can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
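RUSLE-family models estimate average annual soil loss as the product A = R · K · LS · C · P applied cell by cell over the study grid. A toy sketch of the comparison between burnt and unburnt conditions follows; every factor value is illustrative, not taken from the study area:

```python
import numpy as np

# Hypothetical per-cell factor grids; real applications derive R from
# rainfall records, K from soil maps, LS from the DEM (RUSLE3D uses
# upslope contributing area), and C, P from land cover.
R  = np.full((3, 3), 1200.0)   # rainfall erosivity, MJ mm ha^-1 h^-1 yr^-1
K  = np.full((3, 3), 0.03)     # soil erodibility, t ha h ha^-1 MJ^-1 mm^-1
LS = np.array([[0.5, 1.0, 2.0],
               [0.8, 1.5, 3.0],
               [1.0, 2.0, 4.5]])  # slope length-steepness factor
C_burned, P = 0.35, 1.0        # illustrative cover factor after a fire; no support practice
C_unburned  = 0.05             # illustrative cover factor for intact vegetation

A_burned   = R * K * LS * C_burned * P    # soil loss, t ha^-1 yr^-1
A_unburned = R * K * LS * C_unburned * P
print(round(float(A_burned.max() / A_unburned.max()), 2))  # 7.0
```

Because the loss is a simple product, the burnt/unburnt ratio here equals the ratio of C factors, which is why post-fire changes in cover dominate the modelled erosion signal.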
Pagano, Alessandro; Pluchinotta, Irene; Giordano, Raffaele; Vurro, Michele
2016-04-01
Resilience has recently become a key concept and a crucial paradigm in the analysis of the impacts of natural disasters, mainly concerning Lifeline Systems (LS). Traditional risk management approaches require a precise knowledge of all potential hazards and a full understanding of the interconnections among different infrastructures, based on past events and trend analysis. Nevertheless, due to the inherent complexity of LS, their interconnectedness and the dynamic context in which they operate (i.e. technology, economy and society), it is difficult to gain a complete comprehension of the processes influencing vulnerabilities and threats. Resilience thinking therefore addresses the complexities of large integrated systems and the uncertainty of future threats, emphasizing the absorbing, adapting and responsive behavior of the system. Resilience thinking approaches focus on the capability of the system to deal with the unforeseeable. The increasing awareness of the role played by LS has led governmental agencies and institutions to develop resilience management strategies. Risk-prone areas, such as cities, are highly dependent on infrastructures providing essential services that support societal functions, safety, economic prosperity and quality of life. Among the LS, drinking water supply is critical for supporting citizens during emergency and recovery, since a disruption could have a range of serious societal impacts. A well-known method to assess LS resilience is the TOSE approach. The most interesting feature of this approach is its integration of four dimensions: Technical, Organizational, Social and Economic. All of these dimensions contribute to the resilience level of an infrastructural system, and should therefore be quantitatively assessed. Several studies have underlined that the lack of integration among the different dimensions composing the resilience concept may contribute to mismanagement of LS in case of natural disasters
Modeling Lahar Hazard Zones for Eruption-Generated Lahars from Lassen Peak, California
Robinson, J. E.; Clynne, M. A.
2010-12-01
Lassen Peak, a high-elevation, seasonally snow-covered peak located within Lassen Volcanic National Park, has lahar deposits in several drainages that head on or near the lava dome. This suggests that these drainages are susceptible to future lahars. The majority of the recognized lahar deposits are related to the May 19 and 22, 1915 eruptions of Lassen Peak. These small-volume eruptions generated lahars and floods when an avalanche of snow and hot rock, and a pyroclastic flow, moved across the snow-covered upper flanks of the lava dome. Lahars flowed to the north down Lost Creek and Hat Creek. In Lost Creek, the lahars flowed up to 16 km downstream and deposited approximately 8.3 × 10^6 m^3 of sediment. This study uses geologic mapping of the 1915 lahar deposits as a guide for LAHARZ modeling to assist in the assessment of present-day susceptibility for lahars in drainages heading on Lassen Peak. The LAHARZ model requires a height-over-length (H/L) energy cone controlling the initiation point of a lahar. We chose an H/L cone with a slope of 0.3 that intersects the Earth's surface at the break in slope at the base of the volcanic dome. Typically, the snowpack reaches its annual maximum by May. Average and maximum May snow-water contents, depths of water equal to 2.1 m and 3.5 m respectively, were calculated from a local snow gauge. Potential volumes for individual 1915 lahars were calculated using the deposit volume, the snow-water contents, and the areas stripped of snow by the avalanche and pyroclastic flow. The calculated individual lahars in Lost Creek ranged in size from 9 × 10^6 m^3 to 18.4 × 10^6 m^3. These volumes modeled in LAHARZ matched the 1915 lahars remarkably well, with the modeled flows ending within 4 km of the mapped deposits. We delineated six drainage basins that head on or near Lassen Peak with the highest potential for lahar hazards: Lost Creek, Hat Creek, Manzanita Creek, Mill Creek, Warner Creek, and Bailey Creek. We calculated the area of each
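LAHARZ delineates inundation zones by combining the H/L energy cone with the semi-empirical area-volume scaling relations of Iverson et al. (1998), in which both the valley cross-sectional area and the planimetric area grow as V^(2/3). A sketch applying those relations to the two Lost Creek volumes quoted in the abstract:

```python
# LAHARZ scaling relations (Iverson et al., 1998), with coefficients
# calibrated against worldwide lahar case studies:
#   cross-sectional inundated area  A = 0.05 * V**(2/3)
#   planimetric inundated area      B = 200  * V**(2/3)
def laharz_areas(volume_m3):
    """Return (cross-sectional, planimetric) inundation areas in m^2."""
    a = 0.05 * volume_m3 ** (2.0 / 3.0)
    b = 200.0 * volume_m3 ** (2.0 / 3.0)
    return a, b

# The two candidate 1915 Lost Creek lahar volumes from the abstract
for v in (9.0e6, 18.4e6):
    a, b = laharz_areas(v)
    print(f"V = {v:.1e} m^3 -> A = {a:.0f} m^2, B = {b / 1e6:.1f} km^2")
```

The V^(2/3) exponent means a doubling of volume enlarges the inundated areas by only about 59%, which is why the two bracketing volumes give broadly similar hazard zones.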
Stanley, Dal; Villaseñor, Antonio; Benz, Harley
1999-01-01
The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This
Directory of Open Access Journals (Sweden)
S. Khare
2014-08-01
Full Text Available In this paper, we present a novel framework for modelling clustering in natural hazard risk models. The framework is founded on physical principles, where large-scale oscillations in the physical system are the source of non-Poissonian (clustered) frequency behaviour. We focus on a particular mathematical implementation of the "Super-Cluster" methodology that we introduce. This mathematical framework has a number of advantages, including tunability to the problem at hand and the ability to model cross-event correlation. Using European windstorm data as an example, we show that the historical data exhibit strong evidence of clustering. We then develop Poisson and clustered simulation models for the data, clearly demonstrating the superiority of the clustered model, which we have implemented using the Poisson-mixtures approach. We then discuss the implications of including clustering in models of prices on catXL contracts, one of the most commonly used mechanisms for transferring risk between primary insurers and reinsurers. This paper provides a number of new insights into the impact clustering has on modelled catXL contract prices. The simple model presented in this paper provides an insightful starting point for practitioners of natural hazard risk modelling.
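The Poisson-mixtures idea can be demonstrated in a few lines: mixing the Poisson rate over a latent distribution leaves the mean count unchanged but inflates the variance, which is the statistical signature of clustering. The gamma mixing law and all parameter values below are illustrative (a gamma-mixed Poisson is the negative binomial), not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, mean_rate = 50000, 3.0

# Plain Poisson model: one windstorm rate for every year
poisson_counts = rng.poisson(mean_rate, size=n_years)

# Poisson-mixture ("clustered") model: the rate itself varies from year
# to year, mimicking large-scale oscillations that modulate storm
# frequency; a gamma mixing law gives negative-binomial counts
yearly_rates = rng.gamma(shape=2.0, scale=mean_rate / 2.0, size=n_years)
mixed_counts = rng.poisson(yearly_rates)

# Same mean, but the mixture is overdispersed:
# theory: var/mean = 1 for Poisson, 1 + mean/shape = 2.5 for the mixture
print(poisson_counts.var() / poisson_counts.mean())
print(mixed_counts.var() / mixed_counts.mean())
```

For a catXL layer, this overdispersion fattens the tail of the annual count distribution, which is what moves the modelled contract price relative to a Poisson assumption.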
Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.
David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C
2009-08-01
The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called the additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (the product threshold model) for studying a binary reproduction trait, which supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and provides good estimates of the parameters. We showed that fitting an additive threshold model to data simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of the fixed effects affecting each of the 2 unobserved phenotypes.
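The central assumption, an observed binary outcome equal to the product of two unobserved threshold phenotypes, can be sketched by simulation. The heritabilities, the zero threshold, and the independence of the two liabilities below are illustrative choices, not the paper's data-generating setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000  # number of simulated mating couples

# Latent liability on the underlying scale = additive genetic effect
# (breeding value) + residual, with unit total variance
def latent(h2, size):
    g = rng.normal(0.0, np.sqrt(h2), size)        # breeding value
    e = rng.normal(0.0, np.sqrt(1.0 - h2), size)  # residual
    return g + e

thr = 0.0                          # threshold on the underlying scale
male_ok   = latent(0.2, n) > thr   # unobserved male phenotype (0/1)
female_ok = latent(0.1, n) > thr   # unobserved female phenotype (0/1)

# Product threshold model: insemination succeeds only if BOTH
# unobserved phenotypes equal 1
observed = male_ok & female_ok

# Theory under independence: 0.5 * 0.5 = 0.25 success probability
print(round(observed.mean(), 2))
```

Note that an additive threshold model fitted to such data must squeeze a product structure into a sum on the underlying scale, which is the source of the bias the authors report for high breeding values.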
Artigas, Francisco; Bosits, Stephanie; Kojak, Saleh; Elefante, Dominador; Pechmann, Ildiko
2016-10-01
The accurate forecast of the Hurricane Sandy sea surge was the result of integrating the most sophisticated environmental monitoring technology available. This stands in contrast to the limited information and technology that exist at the community level to translate such forecasts into flood hazard levels on the ground at scales that are meaningful to property owners. Appropriately scaled maps with high levels of certainty can be effectively used to convey exposure to flood hazard at the community level. This paper explores the most basic analysis and data required to generate a relatively accurate flood hazard map to convey inundation risk due to sea surge. A Boolean overlay analysis of four input layers (elevation and slope derived from LiDAR data, and distances from streams and catch basins derived from aerial photography and field reconnaissance) was used to create a spatial model that explained 55% of the extent and depth of the flood during Hurricane Sandy. When a ponding layer was added to account for depressions that would fill and spill over to nearby areas, the new model explained almost 70% of the extent and depth of the flood. The study concludes that fairly accurate maps can be created with readily available information and that it is possible to infer a great deal about the risk of inundation at the property level from flood hazard maps. The study goes on to conclude that local communities are encouraged to prepare for disasters, but that in reality, because of the existing federal emergency management framework, there is very little incentive to do so.
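The four-layer Boolean overlay reduces to an element-wise AND over thresholded rasters: a cell is flagged as flood-prone only when every criterion is met. A toy 3×3 sketch, with all grid values and thresholds invented for illustration:

```python
import numpy as np

# Toy grids standing in for the four input layers (real inputs come from
# LiDAR elevation/slope and mapped distances to streams and catch basins)
elevation   = np.array([[1.2, 2.5, 4.0],
                        [0.8, 1.9, 3.5],
                        [0.5, 1.1, 2.8]])  # metres above datum
slope       = np.array([[1.0, 2.0, 5.0],
                        [0.5, 1.5, 4.0],
                        [0.3, 1.0, 3.0]])  # degrees
dist_stream = np.array([[ 30, 120, 300],
                        [ 20,  90, 250],
                        [ 10,  60, 200]])  # metres
dist_basin  = np.array([[ 40, 150, 320],
                        [ 25, 100, 260],
                        [ 15,  70, 210]])  # metres

# Boolean overlay: flood-prone only where ALL four criteria hold
# (thresholds here are illustrative, not the study's)
flood_prone = ((elevation < 2.0) & (slope < 2.0) &
               (dist_stream < 100) & (dist_basin < 120))
print(flood_prone)
```

The ponding refinement mentioned above would then fill closed depressions in the elevation grid before thresholding, letting flagged depressions spill over into adjacent cells.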
Assessing End-Of-Supply Risk of Spare Parts Using the Proportional Hazard Model
X. Li (Xishu); R. Dekker (Rommert); C. Heij (Christiaan); M. Hekimoğlu (Mustafa)
2016-01-01
Operators of long field-life systems like airplanes are faced with hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this may have large impacts on the operating costs of firms needing these parts. Existing end-of-supply evaluation methods
Modelling risk in high hazard operations: integrating technical, organisational and cultural factors
Ale, B.J.M.; Hanea, D.M.; Sillem, S.; Lin, P.H.; Van Gulijk, C.; Hudson, P.T.W.
2012-01-01
Recent disasters in high hazard industries such as Oil and Gas Exploration (The Deepwater Horizon) and Petrochemical production (Texas City) have been found to have causes that range from direct technical failures through organizational shortcomings right up to weak regulation and inappropriate comp
DEFF Research Database (Denmark)
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;
2013-01-01
, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...
Effects of additional food in a delayed predator-prey model.
Sahoo, Banshidhar; Poria, Swarup
2015-03-01
We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model, and its presence reduces the predatory attack rate on prey, so supplying additional food allows the predator population to be controlled. Taking the time delay as bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is carried out with respect to the time delay in the presence of additional food. The direction of the Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. Hopf bifurcation occurs in the system when the delay crosses a critical value, and this critical value depends strongly on the quality and quantity of the supplied additional food. The variation of the predator population therefore significantly affects the dynamics of the model. Model results are compared with experimental results, and the biological implications of the analytical findings are discussed in the conclusion section.
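A delayed predator-prey system of this general kind can be integrated numerically with an Euler scheme that keeps a history buffer for the delayed term. The equations and every parameter below are illustrative stand-ins in the spirit of the model described above, not the authors' exact system.

```python
# Sketch: Euler integration of a delayed predator-prey system in which
# additional food A lowers the attack rate on prey and adds a direct
# growth term for the predator (all coefficients are hypothetical).
def simulate(tau=1.0, A=0.5, dt=0.01, steps=20_000):
    a, b, c, d, e = 1.0, 0.5, 0.4, 0.3, 0.2   # illustrative rates
    lag = int(tau / dt)                        # delay in steps
    xs = [1.0] * (lag + 1)                     # prey history (constant)
    ys = [0.5] * (lag + 1)                     # predator history (constant)
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        x_del = xs[-1 - lag]                   # prey density tau ago
        attack = b / (1.0 + A)                 # additional food lowers attack
        dx = a * x * (1.0 - x) - attack * x * y
        dy = c * attack * x_del * y + d * A * y - e * y
        xs.append(x + dt * dx)
        ys.append(y + dt * dy)
    return xs[-1], ys[-1]

x_end, y_end = simulate()
print(x_end, y_end)
```

Varying `tau` in a sketch like this is how one would probe numerically for the delay-induced Hopf bifurcation the abstract describes.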
Hazard function theory for nonstationary natural hazards
Read, Laura K.; Vogel, Richard M.
2016-04-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
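For the generalized Pareto case named above, the hazard function has a convenient closed form. The sketch below assumes the standard GPD parametrization with scale sigma and shape xi, under which h(x) = f(x)/S(x) = 1/(sigma + xi*x), reducing to the constant hazard of the exponential when xi = 0.

```python
# Sketch: hazard function of POT magnitudes under a generalized Pareto
# model with scale sigma and shape xi (standard parametrization assumed).
def gpd_hazard(x, sigma, xi):
    if xi == 0.0:                  # exponential limit: constant hazard
        return 1.0 / sigma
    return 1.0 / (sigma + xi * x)  # closed form f(x)/S(x)

def gpd_hazard_numeric(x, sigma, xi, eps=1e-6):
    # Numerical check via the GPD survival function S(z).
    def surv(z):
        return (1.0 + xi * z / sigma) ** (-1.0 / xi)
    dens = (surv(x) - surv(x + eps)) / eps     # finite-difference density
    return dens / surv(x)

print(gpd_hazard(2.0, sigma=1.0, xi=0.5))           # 0.5
print(round(gpd_hazard_numeric(2.0, 1.0, 0.5), 3))  # ≈ 0.5
```

A positive shape parameter thus gives a hazard that decreases in x, which is the property that drives the nonstationary return-period results discussed above.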
Degree of multicollinearity and variables involved in linear dependence in additive-dominant models
Directory of Open Access Journals (Sweden)
Juliana Petrini
2012-12-01
Full Text Available The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data on birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 to 70.20 for RM and 1.03 to 60.70 for R. Collinearity increased with the number of variables in the model and with the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariates as fractions of the biological types in breed composition.
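The VIF diagnostic used above is simple to reproduce: VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing covariate j on the remaining covariates. The data below are synthetic, with one deliberately near-collinear pair.

```python
import numpy as np

# Sketch of the variance inflation factor (VIF) diagnostic on synthetic data.
def vif(X):
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        # Regress column j on the other columns (plus intercept).
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(2)
x1 = rng.standard_normal(500)
x2 = rng.standard_normal(500)
x3 = 0.95 * x1 + 0.05 * rng.standard_normal(500)  # nearly collinear with x1
vifs = vif(np.column_stack([x1, x2, x3]))
print([round(v, 1) for v in vifs])  # large VIFs for x1 and x3
```

In the cattle data above, the same computation flags the additive and non-additive covariates, whose breed-composition fractions are linked by construction.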
Institute of Scientific and Technical Information of China (English)
FENG Wenlan; ZHOU Qigang; ZHANG Baolei; ZHOU Wancun; LI Ainong; ZHANG Haizhen; XIAN Wei
2006-01-01
By analyzing the topographic features of past landslides since the 1980s and the main land-cover types (including change information) in the landslide-prone area, the modeled spatial distribution of landslide hazard in the upper Minjiang River Basin was studied based on GIS spatial analysis. Results showed that landslide occurrence in this region is closely related to topographic features: most areas with high hazard probability were deeply sheared gorges. Most of the investigated landslides occurred in areas with elevation lower than 3,000 m, due to fragile topographic conditions and intensive human disturbance. Land-cover type, including its change information, is likely an important environmental factor triggering landslides. Destruction of vegetation, driven by population growth and its demands, increased the probability of landslides on steep slopes.
A Fault-based Crustal Deformation Model for UCERF3 and Its Implication to Seismic Hazard Analysis
Zeng, Y.; Shen, Z.
2012-12-01
shear zone and northern Walker Lane. This implies a significant increase in seismic hazard in eastern California and the northern Walker Lane region, but decreased seismic hazard in the southern San Andreas area, relative to the current model used in the USGS 2008 seismic hazard map evaluation. Overall, the geodetic model suggests an increase in total regional moment rate of 24% compared with the UCERF2 model and the 150-yr California earthquake catalog. However, not all of the increase is seismic, so the seismic/aseismic slip-rate ratios are critical for future seismic hazard assessment.
Fitzgerald, R. H.; Tsunematsu, K.; Kennedy, B. M.; Breard, E. C. P.; Lube, G.; Wilson, T. M.; Jolly, A. D.; Pawson, J.; Rosenberg, M. D.; Cronin, S. J.
2014-10-01
On 6 August, 2012, Upper Te Maari Crater, Tongariro volcano, New Zealand, erupted for the first time in over one hundred years. Multiple vents were activated during the hydrothermal eruption, ejecting blocks up to 2.3 km and impacting ~ 2.6 km of the Tongariro Alpine Crossing (TAC) hiking track. Ballistic impact craters were mapped to calibrate a 3D ballistic trajectory model for the eruption. This was further used to inform future ballistic hazard. Orthophoto mapping revealed 3587 impact craters with a mean diameter of 2.4 m. However, field mapping of accessible regions indicated an average of at least four times more observable impact craters and a smaller mean crater diameter of 1.2 m. By combining the orthophoto and ground-truthed impact frequency and size distribution data, we estimate that approximately 13,200 ballistic projectiles were generated during the eruption. The 3D ballistic trajectory model and a series of inverse models were used to constrain the eruption directions, angles and velocities. When combined with eruption observations and geophysical observations, the model indicates that the blocks were ejected in five variously directed eruption pulses, in total lasting 19 s. The model successfully reproduced the mapped impact distribution using a mean initial particle velocity of 200 m/s with an accompanying average gas flow velocity over a 400 m radius of 150 m/s. We apply the calibrated model to assess ballistic hazard from the August eruption along the TAC. By taking the field mapped spatial density of impacts and an assumption that an average ballistic impact will cause serious injury or death (casualty) over an 8 m2 area, we estimate that the probability of casualty ranges from 1% to 16% along the affected track (assuming an eruption during the time of exposure). Future ballistic hazard and probabilities of casualty along the TAC are also assessed through application of the calibrated model. We model a magnitude larger eruption and illustrate
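As a first-order sanity check on the ejection velocities reported above, a drag-free ballistic range calculation is instructive. This toy formula ignores the drag, gas flow, and topography that the study's 3D trajectory model includes, so it gives an upper bound, consistent with the observed 2.3 km maximum for 200 m/s ejections.

```python
import math

# Sketch: drag-free ballistic range R = v^2 * sin(2*theta) / g.
# The study's calibrated 3D model accounts for drag and terrain; this
# simplified bound is for illustration only.
def ballistic_range(v, theta_deg, g=9.81):
    return v * v * math.sin(2.0 * math.radians(theta_deg)) / g

# Maximum drag-free range for the modeled 200 m/s initial velocity:
print(round(ballistic_range(200.0, 45.0)))  # ≈ 4077 m
```

That the observed maximum impact distance (2.3 km) falls well inside this bound is what one expects once aerodynamic drag on dense blocks is included.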
[Proportional hazards model of birth intervals among marriage cohorts since the 1960s].
Otani, K
1987-01-01
With a view to investigating the possibility of an attitudinal change towards the timing of 1st and 2nd births, proportional hazards model analysis of the 1st and 2nd birth intervals and univariate life table analysis were both carried out. Results showed that love matches and conjugal families immediately after marriage are accompanied by a longer 1st birth interval than others, even after controlling for other independent variables. Marriage cohort analysis also shows a net effect on the relative risk of having a 1st birth. Marriage cohorts since the mid-1960s demonstrate a shorter 1st birth interval than the 1961-63 cohort. With regard to the 2nd birth interval, longer 1st birth intervals, arranged marriages, conjugal families immediately following marriage, and higher ages at 1st marriage of women tended to provoke a longer 2nd birth interval. There is no interaction between the 1st birth interval and marriage cohort. Once other independent variables were controlled, with the exception of the marriage cohorts of the early 1970s, the authors found no effect of marriage cohort on the relative risk of having a 2nd birth. This suggests that an attitudinal change towards the timing of births in this period was mainly restricted to that of a 1st birth. Fluctuations in the 2nd birth interval during the 1970-72 marriage cohort were scrutinized in detail. As a result, the authors found that conjugal families after marriage, wives with low educational status, women with husbands in white collar professions, women with white collar fathers, and wives with high age at 1st marriage who married during 1970-72 and had a 1st birth interval during 1972-74 suffered most from the pronounced rise in the 2nd birth interval. This might be due to the relatively high sensitivity to a change in socioeconomic status; the oil crisis occurring around the time of marriage and 1st birth induced a delay in the 2nd birth. The unanimous decrease in the 2nd birth interval among the 1973
Directory of Open Access Journals (Sweden)
Daniel Asare-Kyei
2015-07-01
Full Text Available Robust risk assessment requires accurate flood intensity area mapping to allow for the identification of populations and elements at risk. However, available flood maps in West Africa lack spatial variability, while global datasets have resolutions too coarse to be relevant for local-scale risk assessment. Consequently, local disaster managers are forced to use traditional methods such as watermarks on buildings and media reports to identify flood hazard areas. In this study, remote sensing and Geographic Information System (GIS) techniques were combined with hydrological and statistical models to delineate the spatial limits of flood hazard zones in selected communities in Ghana, Burkina Faso and Benin. The approach involves estimating peak runoff concentrations at different elevations and then applying statistical methods to develop a Flood Hazard Index (FHI). Results show that about half of the study areas fall into high-intensity flood zones. Empirical validation using a statistical confusion matrix and the principles of Participatory GIS shows that flood hazard areas could be mapped at an accuracy ranging from 77% to 81%. This was supported by local expert knowledge, which accurately classified 79% of communities deemed highly susceptible to flood hazard. The results will assist disaster managers in reducing the risk of flood disasters at the community level, where risk outcomes first materialize.
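A Flood Hazard Index of the kind described above is typically a weighted combination of normalized factor layers. The factors, weights, and threshold below are illustrative placeholders, not the study's fitted values.

```python
import numpy as np

# Sketch: a Flood Hazard Index (FHI) as a weighted sum of min-max
# normalized factor layers (factors and weights are hypothetical).
def minmax(a):
    return (a - a.min()) / (a.max() - a.min())

rng = np.random.default_rng(3)
runoff = rng.uniform(10, 120, 1000)    # peak runoff estimate (synthetic)
low_elev = rng.uniform(0, 1, 1000)     # low-elevation score (synthetic)

fhi = 0.6 * minmax(runoff) + 0.4 * minmax(low_elev)
high = fhi > np.quantile(fhi, 0.5)     # top half labeled high-intensity
print(round(high.mean(), 2))
```

Validation would then compare the thresholded zones against ground-truth flood observations in a confusion matrix, as the abstract describes.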
A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects
Directory of Open Access Journals (Sweden)
Hölzel Dieter
2009-02-01
Full Text Available Abstract Background Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models which additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and lack of assessment of statistical uncertainty. Methods MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework. Results Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available from CRAN. Conclusion The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
ADDITIVE-MULTIPLICATIVE MODEL FOR RISK ESTIMATION IN THE PRODUCTION OF ROCKET AND SPACE TECHNICS
Directory of Open Access Journals (Sweden)
Orlov A. I.
2014-10-01
Full Text Available For the first time, we have developed a general additive-multiplicative model of risk estimation (estimating the probabilities of risk events). In the two-level system, risk estimates are combined additively at the lower level and multiplicatively at the top. The additive-multiplicative model was used for risk estimation in (1) the implementation of innovative projects at universities (with external partners), (2) the production of new innovative products, and (3) projects for the creation of rocket and space equipment.
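The two-level combination rule just described can be sketched directly. The group structure and scores below are hypothetical; the abstract does not specify how within-group sums are bounded, so the cap at 1 is an assumption of this illustration.

```python
# Sketch of a two-level additive-multiplicative risk estimate: component
# scores combine additively within each risk group; the group-level
# estimates then combine multiplicatively (numbers are illustrative).
def risk_estimate(groups):
    total = 1.0
    for scores in groups:
        total *= min(sum(scores), 1.0)  # additive within a group, capped at 1
    return total

# Three hypothetical risk groups (e.g. technical, organisational, financial):
groups = [[0.1, 0.2], [0.05, 0.1], [0.3]]
print(risk_estimate(groups))  # 0.3 * 0.15 * 0.3 = 0.0135
```

The multiplicative top level makes the overall estimate behave like a joint probability of independent group-level events, while the additive lower level aggregates contributing factors within each group.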
Energy Technology Data Exchange (ETDEWEB)
Antoun, T; Harris, D; Lay, T; Myers, S C; Pasyanos, M E; Richards, P; Rodgers, A J; Walter, W R; Zucca, J J
2008-02-11
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes a path by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas.
DEFF Research Database (Denmark)
Wu, Jing; Zhang, Laibin; Lind, Morten;
2013-01-01
HAZOP studies are widely accepted in the chemical and petroleum industries as the method for conducting process hazard analysis related to design, maintenance and operation of systems. Different tools have been developed to automate HAZOP studies. In this paper, a HAZOP reasoning method based on function-oriented modeling, Multilevel Flow Modeling (MFM), is extended with function roles. A graphical MFM editor, combined with the reasoning capabilities of the MFM Workbench developed by DTU, is applied to automate HAZOP studies. The method is proposed to support the “brain-storming” sessions...
21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.
2010-04-01
... Provisions § 123.6 Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan. (a) Hazard... safety hazards, critical control points, critical limits, and procedures required to be identified and... color additives; and (ix) Physical hazards; (2) List the critical control points for each of...
A Nonparametric Bayesian Approach to Seismic Hazard Modeling Using the ETAS Framework
Ross, G.
2015-12-01
The epidemic-type aftershock sequence (ETAS) model is one of the most popular tools for modeling seismicity and quantifying risk in earthquake-prone regions. Under the ETAS model, the occurrence times of earthquakes are treated as a self-exciting Poisson process in which each earthquake briefly increases the probability of subsequent earthquakes occurring soon afterwards, capturing the fact that large mainshocks tend to produce long sequences of aftershocks. A triggering kernel controls the amount by which the probability increases based on the magnitude of each earthquake, and the rate at which it then decays over time. This triggering kernel is usually chosen heuristically, to match the parametric form of the modified Omori law for aftershock decay. However, recent work has questioned whether this is an appropriate choice. Since the choice of kernel has a large impact on the predictions made by the ETAS model, avoiding misspecification is crucially important. We present a novel nonparametric version of ETAS which avoids making parametric assumptions and instead learns the correct specification from the data itself. Our approach is based on the Dirichlet process, a modern class of Bayesian prior distributions which allows for efficient inference over an infinite-dimensional space of functions. We show how our nonparametric ETAS model can be fit to data, and present results demonstrating that the fit is greatly improved compared to the standard parametric specification. Additionally, we explain how our model can be used to perform probabilistic declustering of earthquake catalogs, to classify earthquakes as either aftershocks or mainshocks, and to learn the causal relations between pairs of earthquakes.
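The standard parametric ETAS conditional intensity that the nonparametric version generalizes can be written down compactly. The parameter values below are illustrative, not fitted to any catalog.

```python
import math

# Sketch of the parametric ETAS conditional intensity with a modified-Omori
# triggering kernel (all parameter values are illustrative):
#   lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p
def etas_intensity(t, events, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    rate = mu                          # background seismicity rate
    for t_i, m_i in events:
        if t_i < t:                    # only past events trigger
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

events = [(1.0, 5.0), (2.5, 4.0)]      # (time, magnitude) pairs
print(etas_intensity(3.0, events))
```

The nonparametric approach described above replaces the fixed `1/(t - t_i + c)**p` decay with a kernel learned from data via a Dirichlet process prior, rather than assuming the Omori form.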
Chifflard, Peter; Tilch, Nils
2010-05-01
Introduction Hydrological or geomorphological processes in nature are often very diverse and complex. This is partly due to regional characteristics which vary over time and space, as well as to changeable process-initiating and -controlling factors. Despite awareness of this complexity, such aspects are usually neglected in the modelling of hazard-related maps for several reasons, yet particularly when it comes to creating more realistic maps, they would be an essential component to consider. The first important step towards solving this problem is to collect data relating to regional conditions which vary over time and geographical location, along with indicators of complex processes. Data should be acquired promptly during and after events, and subsequently digitally combined and analysed. Study area In June 2009, considerable damage occurred in the residential area of Klingfurth (Lower Austria) as a result of great pre-event wetness and repeated heavy rainfall, leading to flooding, debris-flow deposits and gravitational mass movement. One of the causes is the fact that the meso-scale watershed (16 km²) of the Klingfurth stream is characterised by adverse geological and hydrological conditions. Additionally, the river network, with its discharge concentration within the residential zone, contributes considerably to flooding, particularly during excessive rainfall across the entire region, as the flood peaks from different parts of the catchment area are superposed. First results of mapping Hydro(geo)logical surveys across the entire catchment area have shown that over 600 gravitational mass movements of various types and stages have occurred; 516 of these have acted as bed-load sources, while 325 mass movements have not yet reached their final stage and could thus supply bed load in the future. It should be noted that large mass movements in the initial or intermediate stage were predominantly found in clayey-silty areas and weathered material
Eeuwijk, van F.A.
1996-01-01
In plant breeding it is a common observation to see genotypes react differently to environmental changes. This phenomenon is called genotype by environment interaction. Many statistical approaches for analysing genotype by environment interaction rely heavily on the analysis of variance model. Genot
An original traffic additional emission model and numerical simulation on a signalized road
Zhu, Wen-Xing; Zhang, Jing-Yu
2017-02-01
Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically classified into two parts: basic emission and additional emission. An original additional-emission model was presented to calculate a vehicle's emissions due to signal-control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating and idling at a signalized intersection. Simulations were conducted for two situations: a single intersection and two adjacent intersections with their respective control policies. Results are in good agreement with the theoretical analysis. It is also shown that the additional-emission model may be used in designing signal control policies for modern traffic systems, to help address serious environmental problems.
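The VSP quantity underlying this classification has a widely cited closed form for light-duty vehicles. The coefficients below are the commonly published light-duty values; treat them, and the example speeds, as assumptions of this sketch rather than the paper's calibration.

```python
# Sketch: the commonly cited light-duty Vehicle Specific Power formula
# (kW per tonne); coefficients are the standard published values and are
# used here purely for illustration.
def vsp(v, a, grade=0.0):
    # v: speed (m/s), a: acceleration (m/s^2), grade: road slope (fraction)
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

# Idling (v = 0) contributes zero VSP, which is why signal-induced stops
# shift emissions into the "additional" category when the vehicle
# subsequently re-accelerates.
print(round(vsp(10.0, 0.0), 3))   # cruising at 10 m/s on a flat road
print(round(vsp(10.0, 1.0), 3))   # same speed while accelerating
```

Binning each simulated car-following trajectory into VSP modes is the usual way such additional-emission terms are accumulated over a signal cycle.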
Li, Shuli; Gray, Robert J
2016-09-01
We consider methods for estimating the treatment effect and/or the covariate by treatment interaction effect in a randomized clinical trial under noncompliance with time-to-event outcome. As in Cuzick et al. (2007), assuming that the patient population consists of three (possibly latent) subgroups based on treatment preference: the ambivalent group, the insisters, and the refusers, we estimate the effects among the ambivalent group. The parameters have causal interpretations under standard assumptions. The article contains two main contributions. First, we propose a weighted per-protocol (Wtd PP) estimator through incorporating time-varying weights in a proportional hazards model. In the second part of the article, under the model considered in Cuzick et al. (2007), we propose an EM algorithm to maximize a full likelihood (FL) as well as the pseudo likelihood (PL) considered in Cuzick et al. (2007). The E step of the algorithm involves computing the conditional expectation of a linear function of the latent membership, and the main advantage of the EM algorithm is that the risk parameters can be updated by fitting a weighted Cox model using standard software and the baseline hazard can be updated using closed-form solutions. Simulations show that the EM algorithm is computationally much more efficient than directly maximizing the observed likelihood. The main advantage of the Wtd PP approach is that it is more robust to model misspecifications among the insisters and refusers since the outcome model does not impose distributional assumptions among these two groups.
Chanturiya, Valentine; Masloboev, Vladimir; Makarov, Dmitriy; Mazukhina, Svetlana; Nesterov, Dmitriy; Men'shikov, Yuriy
2011-01-01
Laboratory tests and physical-chemical modeling have determined that mixtures of activated silica and carbonatite, and of serpophite and carbonatite, show considerable promise for developing artificial geochemical barriers. The obtained average contents of nickel and copper deposited on geochemical barriers in the formed mining-induced ores are acceptable for subsequent cost-efficient processing using either pyro- or hydrometallurgical methods. Some tests of geochemical barriers have been carried out using polluted water in the impact zone of the "Kol'skaya GMK" JSC. The possibility of purifying water of heavy metals down to the MAC level for fishery water bodies has been demonstrated.
Roulleau, Louise; Bétard, François; Carlier, Benoît; Lissak, Candide; Fort, Monique
2016-04-01
Landslides are common natural hazards in the Southern French Alps, where they may affect human lives and cause severe damage to infrastructure. As part of the SAMCO research project dedicated to risk evaluation in mountain areas, this study focuses on the Guil river catchment (317 km2), Queyras, to assess landslide hazard, which has been poorly studied until now. In that area, landslides are mainly occasional, low-amplitude phenomena, with limited direct impacts compared to other hazards such as floods or snow avalanches. However, when interacting with floods during extreme rainfall events, landslides may have indirect consequences of greater importance because of strong hillslope-channel connectivity along the Guil River and its tributaries (i.e. positive feedbacks). This specific morphodynamic functioning reinforces the need for a better understanding of landslide hazards and their spatial distribution at the catchment scale to protect the local population from disasters of multi-hazard origin. The aim of this study is to produce a landslide susceptibility map at 1:50 000 scale as a first step towards a global estimation of landslide hazard and risk. The three main methodologies used for assessing landslide susceptibility are qualitative (i.e. expert opinion), deterministic (i.e. physics-based models) and statistical (i.e. probabilistic models). Due to the rapid development of geographical information systems (GIS) during the last two decades, statistical methods are today widely used because they offer greater objectivity and reproducibility at large scales. Among them, multivariate analyses are considered the most robust techniques, especially the logistic regression method commonly used in landslide susceptibility mapping. However, this method, like others, is strongly dependent on the accuracy of the input data to avoid significant errors in the final results. In particular, a complete and accurate landslide inventory is required before the modelling
Yang, Xiaobao; Huan, Mei; Abdel-Aty, Mohamed; Peng, Yichuan; Gao, Ziyou
2015-01-01
This paper presents a hazard-based duration approach to investigate riders' waiting times, violation hazards, associated risk factors, and their differences between cyclists and electric bike riders at signalized intersections. A total of 2322 two-wheeled riders approaching the intersections during red-light periods were observed in Beijing, China. The data were classified into censored and uncensored observations to distinguish between safe crossing and red-light running behavior. The results indicated that the red-light crossing behavior of most riders was dependent on waiting time: they were inclined to terminate waiting and run against the traffic light as the waiting duration increased. Over half of the observed riders could not endure waiting 49 s or longer, while 25% of the riders endured 97 s or longer. Rider type, gender, waiting position, conformity tendency and crossing traffic volume were identified as having significant effects on riders' waiting times and violation hazards. Electric bike riders were found to be more sensitive than cyclists to external risk factors such as other riders' crossing behavior and crossing traffic volume. Moreover, unobserved heterogeneity was examined in the proposed models. The findings of this paper explain when and why cyclists and electric bike riders run against the red light at intersections, and are useful for traffic design and management agencies implementing strategies to enhance rider safety.
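The censored/uncensored distinction above is exactly the setting of a Kaplan-Meier survival estimate for waiting durations: a red-light run is an event, a legal crossing is a censored observation. The durations below are made up for illustration.

```python
# Sketch: Kaplan-Meier survival curve for waiting durations, treating a
# red-light run as an event and a safe crossing as censoring (toy data).
def kaplan_meier(durations, event):
    # durations: waiting times; event[i] = 1 for a violation, 0 if censored.
    data = sorted(zip(durations, event))
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, e in data:
        if e:                                   # survival drops only at events
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
        curve.append((t, surv))
    return curve

durations = [10, 20, 30, 40, 50, 60]            # seconds (hypothetical)
event = [1, 0, 1, 1, 0, 1]
for t, s in kaplan_meier(durations, event):
    print(t, round(s, 3))
```

Statements such as "over half could not endure 49 s" correspond to reading off where this survival curve crosses 0.5; the paper's duration models additionally attach covariates to the hazard.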
Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated reported cases of influenza in Catalonia (Spain) in relation to several covariates: population, age, date of report, and health region, during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates through a descriptive analysis, and Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of influenza transmission. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can accommodate data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated per 100,000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models adapted better to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
Cross, Paul C.; Maichak, Eric J.; Rogerson, Jared D.; Irvine, Kathryn M.; Jones, Jennifer D; Heisey, Dennis M.; Edwards, William H.; Scurlock, Brandon M.
2015-01-01
Understanding the seasonal timing of disease transmission can lead to more effective control strategies, but the seasonality of transmission is often unknown for pathogens transmitted directly. We inserted vaginal implant transmitters (VITs) in 575 elk (Cervus elaphus canadensis) from 2006 to 2014 to assess when reproductive failures (i.e., abortions or stillbirths) occur, which is the primary transmission route of Brucella abortus, the causative agent of brucellosis in the Greater Yellowstone Ecosystem. Using a survival analysis framework, we developed a Bayesian hierarchical model that simultaneously estimated the total baseline hazard of a reproductive event as well as its 2 mutually exclusive parts (abortions or live births). Approximately 16% (95% CI = 0.10, 0.23) of the pregnant seropositive elk had reproductive failures, whereas 2% (95% CI = 0.01, 0.04) of the seronegative elk had probable abortions. Reproductive failures could have occurred as early as 13 February and as late as 10 July, peaking from March through May. Model results suggest that less than 5% of likely abortions occurred after 6 June each year and that abortions were approximately 5 times more likely in March, April, or May than in February or June. In western Wyoming, supplemental feeding of elk begins in December and ends during the peak of elk abortions and brucellosis transmission (i.e., March and April). Years with more snow may enhance elk-to-elk transmission on supplemental feeding areas because elk are artificially aggregated for the majority of the transmission season. Elk-to-cattle transmission will depend on the transmission period relative to the end of the supplemental feeding season, elk seroprevalence, population size, and the amount of commingling. Our statistical approach allowed us to estimate the probability density function of different event types over time, which may be applicable to other cause-specific survival analyses. It is often challenging to assess the
Gay, Emilie; Senoussi, Rachid; Barnouin, Jacques
2007-01-01
Methods for spatial cluster detection dealing with diseases quantified by continuous variables are few, whereas several diseases are better approached by continuous indicators. For example, subclinical mastitis of the dairy cow is evaluated using a continuous marker of udder inflammation, the somatic cell score (SCS). Consequently, this study proposed to analyze spatialized risk and cluster components of herd SCS through a new method based on a spatial hazard model. The dataset included annual SCS for 34 142 French dairy herds for the year 2000, and important SCS risk factors: mean parity, percentage of winter and spring calvings, and herd size. The model allowed the simultaneous estimation of the effects of known risk factors and of potential spatial clusters on SCS, and the mapping of the estimated clusters and their range. Mean parity and winter and spring calvings were significantly associated with subclinical mastitis risk. The model with the presence of 3 clusters was highly significant, and the 3 clusters were attractive, i.e. closeness to cluster center increased the occurrence of high SCS. The three localizations were the following: close to the city of Troyes in the northeast of France; around the city of Limoges in the center-west; and in the southwest close to the city of Tarbes. The semi-parametric method based on spatial hazard modeling applies to continuous variables, and takes account of both risk factors and potential heterogeneity of the background population. This tool allows a quantitative detection but assumes a spatially specified form for clusters.
Seismic hazard studies in Egypt
Directory of Open Access Journals (Sweden)
Abuo El-Ela A. Mohamed
2012-12-01
Full Text Available The study of earthquake activity and seismic hazard assessment of Egypt is very important due to the great and rapid spread of large investments in national projects, especially the nuclear power plant to be built in the northern part of Egypt. Although Egypt is characterized by low seismicity, it has experienced damaging earthquakes throughout its history. The seismotectonic setting of Egypt suggests that large earthquakes are possible, particularly along the Gulf of Aqaba–Dead Sea transform, the subduction zone along the Hellenic and Cyprean Arcs, and the Northern Red Sea triple junction point. In addition, some significant inland sources at Aswan, Dahshour, and the Cairo-Suez District should be considered. The seismic hazard for Egypt is calculated using a probabilistic approach (for a grid of 0.5° × 0.5°) within a logic-tree framework. Alternative seismogenic models and ground motion scaling relationships are selected to account for the epistemic uncertainty. Seismic hazard values on rock were calculated to create contour maps for four ground motion spectral periods and for different return periods. In addition, the uniform hazard spectra for rock sites for 25 different periods, and the probabilistic hazard curves for the cities of Cairo and Alexandria, are graphed. The highest peak ground acceleration (PGA) values, about 220 gal for a 475-year return period, were found close to the Gulf of Aqaba, while the lowest PGA values, less than 25 gal, were detected in the western part of the Western Desert.
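The probabilistic machinery sketched above (weighted seismogenic branches, exceedance rates, return periods) can be outlined in a few lines. The branch models, weights, and all numbers here are invented for illustration and are not the study's actual logic tree:

```python
import numpy as np
from scipy.stats import norm

pga = np.logspace(-2, 0, 50)                       # PGA grid, in g

def branch_rate(pga, nu, a50):
    """Annual exceedance rate for one logic-tree branch (toy lognormal model)."""
    return nu * norm.sf(np.log(pga / a50) / 0.6)

weights = [0.5, 0.3, 0.2]                          # hypothetical branch weights
branches = [(0.10, 0.05), (0.08, 0.08), (0.12, 0.04)]  # (rate nu, median PGA)
lam = sum(w * branch_rate(pga, nu, a50) for w, (nu, a50) in zip(weights, branches))

T = 50.0                                           # exposure time in years
p_exceed = 1.0 - np.exp(-lam * T)                  # Poisson exceedance probability
# A 475-year return period corresponds to ~10% exceedance in 50 years:
pga_475 = float(np.interp(0.10, p_exceed[::-1], pga[::-1]))
print(pga_475)
```

Repeating this for each grid cell and spectral period is what produces the contour maps and uniform hazard spectra described in the abstract.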
Structured Additive Regression Models: An R Interface to BayesX
Directory of Open Access Journals (Sweden)
Nikolaus Umlauf
2015-02-01
Full Text Available Structured additive regression (STAR) models provide a flexible framework for modeling possible nonlinear effects of covariates: They contain the well established frameworks of generalized linear models and generalized additive models as special cases but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for specification of complex and realistic models. BayesX is a standalone software package for fitting a general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous time survival data, or continuous time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using R's formula language (with some extended terms), fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R by a comprehensive toolbox comprising in particular more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.
Zahran, Hani M.; Sokolov, Vladimir; Roobol, M. John; Stewart, Ian C. F.; El-Hadidy Youssef, Salah; El-Hadidy, Mahmoud
2016-07-01
A new seismic source model has been developed for the western part of the Arabian Peninsula, which has experienced considerable earthquake activity in the historical past and in recent times. The data used for the model include an up-to-date seismic catalog, results of recent studies of Cenozoic faulting in the area, aeromagnetic anomaly and gravity maps, geological maps, and miscellaneous information on volcanic activity. The model includes 18 zones ranging along the Red Sea and the Arabian Peninsula from the Gulf of Aqaba and the Dead Sea in the north to the Gulf of Aden in the south. The seismic source model developed in this study may be considered as one of the basic branches in a logic tree approach for seismic hazard assessment in Saudi Arabia and adjacent territories.
A guide to generalized additive models in crop science using SAS and R
Directory of Open Access Journals (Sweden)
Josefine Liew
2015-06-01
Full Text Available Linear models and generalized linear models are well known and are used extensively in crop science. Generalized additive models (GAMs) are less well known. GAMs extend generalized linear models through the inclusion of smoothing functions of explanatory variables, e.g., spline functions, allowing the curves to bend to better describe the observed data. This article provides an introduction to GAMs in the context of crop science experiments. This is exemplified using a dataset consisting of four populations of perennial sow-thistle (Sonchus arvensis L.), originating from two regions, for which emergence of shoots over time was compared.
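To make the smoothing idea concrete, here is a minimal backfitting sketch (GAMs are normally fitted with R's mgcv or SAS, as the article does; the crude moving-average smoother and the simulated data below merely stand in for a spline fit):

```python
import numpy as np

def smooth(x, r, frac=0.1):
    """Crude moving-average smoother of residual r against x (spline stand-in)."""
    order = np.argsort(x)
    k = max(3, int(frac * len(x)))
    kernel = np.ones(k) / k
    sm = np.convolve(r[order], kernel, mode="same")
    out = np.empty_like(r)
    out[order] = sm
    return out - out.mean()                 # centre each term for identifiability

# Toy additive truth: y = sin(2*pi*x1) + x2**2 + noise (simulated, not crop data).
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.uniform(0, 1, n), rng.uniform(-1, 1, n)
y = np.sin(2 * np.pi * x1) + x2 ** 2 + rng.normal(0, 0.1, n)

# Backfitting: cycle over predictors, smoothing the partial residuals of each.
alpha, f1, f2 = y.mean(), np.zeros(n), np.zeros(n)
for _ in range(20):
    f1 = smooth(x1, y - alpha - f2)
    f2 = smooth(x2, y - alpha - f1)
fit = alpha + f1 + f2
print(np.corrcoef(fit, y)[0, 1])
```

Each smooth term is free to "bend", which is exactly what lets a GAM track the sinusoidal and quadratic components that a linear model would miss.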
The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies
DEFF Research Database (Denmark)
Scheike, Thomas; Martinussen, T; Zhang, MJ
2011-01-01
leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach, where one would use the expectation-maximization (EM) algorithm, cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator whose large sample properties can...
Dryginin, N. V.; Krasnoveikin, V. A.; Filippov, A. V.; Tarasov, S. Yu.; Rubtsov, V. E.
2016-11-01
Additive manufacturing by 3D printing is the most advanced and promising trend for making multicomponent composites. Polymer-based carbon fiber reinforced composites demonstrate high mechanical properties combined with the low weight of the component. This paper shows the results of 3D modeling and experimental modal analysis on a polymer composite framework obtained using additive manufacturing. Using three oscillation modes as examples, agreement was shown between the results of modeling and of experimental modal analysis performed with laser Doppler vibrometry.
Energy Technology Data Exchange (ETDEWEB)
Voglar, Grega E. [RDA - Regional Development Agency Celje, Kidriceva ulica 25, 3000 Celje (Slovenia); Lestan, Domen, E-mail: domen.lestan@bf.uni-lj.si [Agronomy Department, Centre for Soil and Environmental Science, Biotechnical Faculty, University of Ljubljana, Jamnikarjeva 101, 1000 Ljubljana (Slovenia)
2011-08-30
Highlights: • We assess the feasibility of using soil S/S for industrial land reclamation. • Retarders, accelerators, and plasticizers were used in S/S cementitious formulations. • We propose a novel S/S efficiency model for multi-metal contaminated soils. - Abstract: In a laboratory study, formulations of 15% (w/w) ordinary Portland cement (OPC), calcium aluminate cement (CAC) and pozzolanic cement (PC) with additives: plasticizers cementol delta ekstra (PCDE) and cementol antikorodin (PCA), polypropylene fibers (PPF), polyoxyethylene-sorbitan monooleate (Tween 80) and aqueous acrylic polymer dispersion (Akrimal) were used for solidification/stabilization (S/S) of soils from an industrial brownfield contaminated with up to 157, 32,175, 44,074, 7614, 253 and 7085 mg kg⁻¹ of Cd, Pb, Zn, Cu, Ni and As, respectively. Soils formed solid monoliths with all cementitious formulations tested, with a maximum mechanical strength of 12 N mm⁻² achieved after S/S with CAC + PCA. To assess the S/S efficiency of the formulations for multi-element contaminated soils, we propose an empirical model in which data on equilibrium leaching of toxic elements into deionized water and TCLP (toxicity characteristic leaching procedure) solution and the mass transfer of elements from soil monoliths are weighed against the relative potential hazard of each toxic element. Based on the model calculation, the most efficient S/S formulation was CAC + Akrimal, which reduced soil leachability of Cd, Pb, Zn, Cu, Ni and As into deionized water below the limit of quantification and into TCLP solution by up to 55, 185, 8750, 214, 4.7 and 1.2 times, respectively, and the mass transfer of elements from soil monoliths by up to 740, 746, 104,000, 4.7, 343 and 181 times, respectively.
Friedel, M.J.
2011-01-01
Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organized map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criteria following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
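The core SOM training rule the study relies on is compact enough to sketch. The 2-feature data, grid size, and learning schedules below are hypothetical and do not reproduce the 540-basin dataset or the study's configuration:

```python
import numpy as np

# Minimal self-organized map: a 1D grid of neurons learns the structure of
# toy 2-feature "basin" data via the classic SOM competitive update rule.
rng = np.random.default_rng(2)
data = rng.normal(size=(540, 2)) * [1.0, 0.3]     # hypothetical landscape features
m = 10                                            # neurons on a 1D grid
W = rng.normal(size=(m, 2)) * 0.1                 # neuron weight vectors
grid = np.arange(m)

for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    lr = 0.5 * (1 - t / 2000)                     # decaying learning rate
    radius = 1 + 3 * (1 - t / 2000)               # shrinking neighborhood
    h = np.exp(-((grid - bmu) ** 2) / (2 * radius ** 2))
    W += lr * h[:, None] * (x - W)                # pull the neighborhood toward x

# Quantization error: mean distance from each sample to its nearest neuron.
qe = np.mean([np.min(((W - x) ** 2).sum(axis=1)) ** 0.5 for x in data])
print(qe)
```

After training, clustering the neuron weights (e.g., with k-means, as in the study) groups similar basins, and inspecting each weight component across the grid gives the "component planes" used for pattern analysis.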
Hydrothermal Liquefaction Treatment Preliminary Hazard Analysis Report
Energy Technology Data Exchange (ETDEWEB)
Lowry, Peter P.; Wagner, Katie A.
2015-08-31
A preliminary hazard assessment was completed during February 2015 to evaluate the conceptual design of the modular hydrothermal liquefaction treatment system. The hazard assessment was performed in 2 stages. An initial assessment utilizing Hazard Identification and Preliminary Hazards Analysis (PHA) techniques identified areas with significant or unique hazards (process safety-related hazards) that fall outside of the normal operating envelope of PNNL and warranted additional analysis. The subsequent assessment was based on a qualitative What-If analysis. This analysis was augmented, as necessary, by additional quantitative analysis for scenarios involving a release of hazardous material or energy with the potential for affecting the public.
Energy Technology Data Exchange (ETDEWEB)
Doligez, B.; Eschard, R. [Institut Francais du Petrole, Rueil Malmaison (France); Geffroy, F. [Centre de Geostatistique, Fontainebleau (France)] [and others
1997-08-01
The classical approach to constructing reservoir models is to start with a fine-scale geological model informed with petrophysical properties. Scaling-up techniques then yield a reservoir model compatible with fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability matches that computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with excellent areal coverage, but with poor vertical resolution. New advances in modelling techniques now make it possible to integrate this type of additional external information in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models
Fan, Jianqing; Song, Rui
2011-01-01
A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening is called NIS, a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under the nonparametric additive models, it is shown that under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) is also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data a...
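A hedged toy version of the screening step (a cubic-polynomial marginal fit stands in for the paper's spline-based marginal nonparametric regression; the data are simulated):

```python
import numpy as np

# Rank each predictor by how much a *marginal* one-dimensional fit explains
# the response, then keep the top-d set (the sure screening step).
rng = np.random.default_rng(1)
n, p = 200, 1000
X = rng.normal(size=(n, p))
# Sparse additive truth: only features 0 and 1 matter, feature 1 nonlinearly.
y = 2 * X[:, 0] + 3 * np.sin(2 * X[:, 1]) + rng.normal(size=n)

def marginal_r2(x, y, deg=3):
    """R^2 of a cubic-polynomial marginal regression of y on one feature."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return 1.0 - resid.var() / y.var()

scores = np.array([marginal_r2(X[:, j], y) for j in range(p)])
top = np.argsort(scores)[::-1][:10]        # retained set after screening
print(sorted(top.tolist()))
```

A purely linear correlation screen could miss feature 1, whose effect is nonlinear; the nonparametric marginal fit is what lets NIS retain it.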
New results in RR Lyrae modeling: convective cycles, additional modes and more
Molnár, L; Szabó, R; Plachy, E
2012-01-01
Recent theoretical and observational findings have breathed new life into the field of RR Lyrae stars. The ever more precise and complete measurements of the space asteroseismology missions revealed new details, such as period doubling and the presence of additional modes in these stars. Theoretical work also flourished: period doubling was explained and an additional mode has been detected in hydrodynamic models as well. Although the most intriguing mystery, the Blazhko effect, has remained unsolved, new findings indicate that the convective cycle model can be effectively ruled out for short- and medium-period modulations. On the other hand, the plausibility of the radial resonance model is increasing, as more and more resonances are detected both in models and in stars.
Zeng, Y.; Shen, Z.; Harmsen, S.; Petersen, M. D.
2010-12-01
We invert GPS observations to determine the slip rates on major faults in California based on a kinematic fault model of crustal deformation with geological slip rate constraints. Assuming an elastic half-space, we interpret secular surface deformation using a kinematic fault network model with each fault segment slipping beneath a locking depth. This model simulates both block-like deformation and elastic strain accumulation within each bounding block. Each fault segment is linked to its adjacent elements with slip continuity imposed at fault nodes or intersections. The GPS observations across California and its neighbors are obtained from the SCEC WGCEP project of California Crustal Motion Map version 1.0 and SCEC Crustal Motion Map 4.0. Our fault models are based on the SCEC UCERF 2.0 fault database, a previous southern California block model by Shen and Jackson, and the San Francisco Bay area block model by d’Alessio et al. Our inversion shows a slip rate ranging from 20 to 26 mm/yr for the northern San Andreas from the Santa Cruz Mountain to the Peninsula segment. Slip rates vary from 8 to 14 mm/yr along the Hayward to the Maacama segment, and from 17 to 6 mm/yr along the central Calaveras to West Napa. For the central California creeping section, we find a depth-dependent slip rate with an average slip rate of 23 mm/yr across the upper 5 km and 30 mm/yr underneath. Slip rates range from 30 mm/yr along the Parkfield and central California creeping section of the San Andreas to an average of 6 mm/yr on the San Bernardino Mountain segment. On the southern San Andreas, slip rates vary from 21 to 30 mm/yr from the Coachella Valley to the Imperial Valley, and from 7 to 16 mm/yr along the San Jacinto segments. The shortening rate across the greater Los Angeles region is consistent with the regional tectonics and crustal thickening in the area. We are now in the process of applying the result to seismic hazard evaluation. Overall, the geodetic and geologically derived...
Energy Technology Data Exchange (ETDEWEB)
Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory
2009-01-01
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
Leonard, Graham S.; Stewart, Carol; Wilson, Thomas M.; Procter, Jonathan N.; Scott, Bradley J.; Keys, Harry J.; Jolly, Gill E.; Wardman, Johnny B.; Cronin, Shane J.; McBride, Sara K.
2014-10-01
New Zealand's Tongariro National Park volcanoes produce hazardous eruptions every few years to decades. On 6 August 2012 the Te Maari vent of Tongariro Volcano erupted, producing a series of explosions and a fine ash of minor volume which was dispersed rapidly to the east. This manuscript presents a summary of the eruption impacts and the way these supported science communication during the crisis, particularly in terms of hazard map development. The most significant proximal impact was damage from pyroclastic surges and ballistics to the popular and economically-important Tongariro Alpine Crossing track. The only hazard to affect the medial impact zone was a few millimetres of ashfall, with minor impacts. Field testing indicated that the Te Maari ash had extremely low resistivity when wetted, implying a very high potential to cause disruption to nationally-important power transmission networks via the mechanism of insulator flashover. This was not observed, presumably due to insufficient ash accumulation on insulators. Virtually no impacts from distal ashfall were reported. Post-event analysis of PM10 data demonstrates the additional value of regional air quality monitoring networks in quantifying population exposure to airborne respirable ash. While the eruption was minor, it generated a high level of public interest and a demand for information on volcanic hazards and impacts from emergency managers, the public, critical infrastructure managers, health officials, and the agriculture sector. Meeting this demand fully taxed available resources. We present here aspects of the New Zealand experience which may have wider applicability in moving towards improved integration of hazard impact information, mapping, and communication. These include wide use of a wiki technical clearinghouse and email listservs, a focus on multi-agency consistent messages, and a recently developed environment of collaboration and alignment of both research funding and technical science advice.
Zhang, Xiaolong; Li, Liang; Pan, Deng; Cao, Chengmao; Song, Jian
2014-03-01
The current research on real-time observation of vehicle roll steer angle and compliance steer angle (referred to jointly as the additional steer angle in this paper) mainly employs the linear vehicle dynamic model, in which only the lateral acceleration of the vehicle body is considered. The observation accuracy of this method cannot meet the requirements of real-time vehicle stability control, especially under extreme driving conditions. This paper explores a solution based on an experimental method. Firstly, a multi-body dynamic model of a passenger car is built using the ADAMS/Car software, whose dynamic accuracy is verified against the same vehicle's road-test data from steady-state circular tests. Based on this simulation platform, several factors influencing the additional steer angle under different driving conditions are quantitatively analyzed. Then the ɛ-SVR algorithm is employed to build the additional steer angle prediction model, whose input vectors mainly include the sensor information of a standard electronic stability control system (ESC). Typical slalom tests and FMVSS 126 tests are adopted for simulation, model training, and testing of the model's generalization performance. The test results show that the influence of lateral acceleration on the additional steer angle is greatest (up to 1° in magnitude), followed by longitudinal acceleration-deceleration and road wave amplitude (up to 0.3°). Moreover, both the prediction accuracy and the real-time performance of the model meet the control requirements of ESC. This research extends accurate observation of the additional steer angle to extreme driving conditions.
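The ε-SVR prediction step can be sketched with scikit-learn's SVR on synthetic data (the feature set, the target relation, and every coefficient below are hypothetical stand-ins for the ADAMS/Car simulation outputs, not the paper's model):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic "ESC sensor" training data: lateral/longitudinal acceleration
# and yaw rate, with a made-up additional-steer-angle response.
rng = np.random.default_rng(3)
n = 400
ay = rng.uniform(-8, 8, n)          # lateral acceleration, m/s^2
ax = rng.uniform(-4, 4, n)          # longitudinal acceleration, m/s^2
yaw = rng.uniform(-0.5, 0.5, n)     # yaw rate, rad/s
# Hypothetical relation: mostly linear in ay plus a small nonlinear coupling.
angle = 0.11 * ay + 0.05 * ax + 0.2 * ay * np.abs(yaw) + rng.normal(0, 0.02, n)

X = np.column_stack([ay, ax, yaw])
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, angle)
pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - angle) ** 2)))
print(rmse)
```

The RBF kernel is what lets the regression capture the nonlinear coupling term that a linear observer would miss, which mirrors the paper's motivation for ε-SVR over the linear vehicle model.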
Baruffi, F; Cisotto, A; Cimolino, A; Ferri, M; Monego, M; Norbiato, D; Cappelletto, M; Bisaglia, M; Pretner, A; Galli, A; Scarinci, A; Marsala, V; Panelli, C; Gualdi, S; Bucchignani, E; Torresan, S; Pasini, S; Critto, A; Marcomini, A
2012-12-01
Climate change impacts on water resources, particularly groundwater, are a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers due to the relevant link with European water policy directives (e.g. 2000/60/EC and 2007/118/EC) and related environmental objectives. The understanding of long-term impacts of climate variability and change is therefore a key challenge in order to adopt effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of the high Veneto and Friuli Plain, Northern Italy. Given the aim of evaluating potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections were made prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling then provided the precipitation, temperature and evapo-transpiration fields used for the impact analysis. Based on the downscaled climate projections, 3 reference scenarios for the period 2071-2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble produced...
Vector generalized linear and additive models with an implementation in R
Yee, Thomas W
2015-01-01
This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable. The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...
Modeling the use of sulfate additives for potassium chloride destruction in biomass combustion
DEFF Research Database (Denmark)
Wu, Hao; Grell, Morten Nedergaard; Jespersen, Jacob Boll;
2013-01-01
was affected by the decomposition temperature. Based on the experimental data, a model was proposed to simulate the sulfation of KCl by different sulfate additives, and the simulation results were compared with pilot-scale experiments conducted in a bubbling fluidized bed reactor. The simulation results...
Midrapidity inclusive densities in high energy pp collisions in additive quark model
Shabelski, Yu. M.; Shuvaev, A. G.
2016-08-01
High energy (CERN SPS and LHC) inelastic pp (p̄p) scattering is treated in the framework of the additive quark model together with Pomeron exchange theory. We extract the midrapidity inclusive density of the charged secondaries produced in a single quark-quark collision and investigate its energy dependence. Predictions for πp collisions are presented.
Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti
2016-01-01
The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…
Additional interfacial force in lattice Boltzmann models for incompressible multiphase flows
Li, Q; Gao, Y J
2011-01-01
The existing lattice Boltzmann models for incompressible multiphase flows are mostly constructed with two distribution functions, one is the order parameter distribution function, which is used to track the interface between different phases, and the other is the pressure distribution function for solving the velocity field. In this brief report, it is shown that in these models the recovered momentum equation is inconsistent with the target one: an additional interfacial force is included in the recovered momentum equation. The effects of the additional force are investigated by numerical simulations of droplet splashing on a thin liquid film and falling droplet under gravity. In the former test, it is found that the formation and evolution of secondary droplets are greatly affected, while in the latter the additional force is found to increase the falling velocity and limit the stretch of the droplet.
CalTOX, a multimedia total exposure model for hazardous-waste sites; Part 1, Executive summary
Energy Technology Data Exchange (ETDEWEB)
McKone, T.E.
1993-06-01
CalTOX has been developed as a spreadsheet model to assist in health-risk assessments that address contaminated soils and the contamination of adjacent air, surface water, sediments, and ground water. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify and reduce uncertainty in multimedia, multiple-pathway exposure models. This report provides an overview of the CalTOX model components, lists the objectives of the model, describes the philosophy under which the model was developed, identifies the chemical classes for which the model can be used, and describes critical sensitivities and uncertainties. The multimedia transport and transformation model is a dynamic model that can be used to assess time-varying concentrations of contaminants introduced initially to soil layers or for contaminants released continuously to air or water. This model assists the user in examining how chemical and landscape properties impact both the ultimate route and quantity of human contact. Multimedia, multiple-pathway exposure models are used in the CalTOX model to estimate average daily potential doses within a human population in the vicinity of a hazardous substances release site. The exposure models encompass twenty-three exposure pathways. The exposure assessment process consists of relating contaminant concentrations in the multimedia model compartments to contaminant concentrations in the media with which a human population has contact (personal air, tap water, foods, household dusts, soils, etc.). The average daily dose is the product of the exposure concentrations in these contact media and an intake or uptake factor that relates the concentrations to the distributions of potential dose within the population.
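The dose arithmetic in the last sentence reduces to a product-and-sum over pathways. The pathway names, concentrations, and intake factors below are hypothetical examples for illustration, not CalTOX's actual parameter values:

```python
# Average daily dose = sum over pathways of
# (contact-medium concentration) * (intake factor per kg body weight per day).
pathways = {
    # pathway: (concentration, intake factor)
    "tap_water_mg_per_L":     (0.002, 2.0 / 70),     # L/day per kg body weight
    "personal_air_mg_per_m3": (0.0005, 20.0 / 70),   # m3/day per kg body weight
    "soil_mg_per_kg":         (1.5, 1e-4 / 70),      # kg/day per kg body weight
}
add = sum(conc * intake for conc, intake in pathways.values())  # mg/(kg*day)
print(f"average daily dose: {add:.2e} mg/(kg*day)")
```

In the spreadsheet model each concentration and intake factor carries a mean and coefficient of variation, which is what makes the same calculation amenable to Monte-Carlo uncertainty analysis.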
Energy Technology Data Exchange (ETDEWEB)
McKone, T.E.
1994-01-01
Risk assessment is a quantitative evaluation of information on potential health hazards of environmental contaminants and the extent of human exposure to these contaminants. As applied to toxic chemical emissions to air, risk assessment involves four interrelated steps. These are (1) determination of source concentrations or emission characteristics, (2) exposure assessment, (3) toxicity assessment, and (4) risk characterization. These steps can be carried out with assistance from analytical models in order to estimate the potential risk associated with existing and future releases. CAirTOX has been developed as a spreadsheet model to assist in making these types of calculations. CAirTOX follows an approach that has been incorporated into the CalTOX model, which was developed for the California Department of Toxic Substances Control. With CAirTOX, we can address how contaminants released to an air basin can lead to contamination of soil, food, surface water, and sediments. The modeling effort includes a multimedia transport and transformation model, exposure scenario models, and efforts to quantify uncertainty in multimedia, multiple-pathway exposure assessments. The capacity to explicitly address uncertainty has been incorporated into the model in two ways. First, the spreadsheet form of the model makes it compatible with Monte-Carlo add-on programs that are available for uncertainty analysis. Second, all model inputs are specified in terms of an arithmetic mean and coefficient of variation so that uncertainty analyses can be carried out.
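The mean-plus-coefficient-of-variation input convention described above can be sketched with a small Monte Carlo propagation. The lognormal distribution choice and all numeric inputs here are assumptions for illustration; the abstract specifies only the mean/CV input format, not the distributional family.

```python
import math
import random

def lognormal_params(mean, cv):
    """Convert an arithmetic mean and coefficient of variation to the
    log-space parameters (mu, sigma) of a lognormal distribution."""
    sigma2 = math.log(1.0 + cv * cv)
    return math.log(mean) - 0.5 * sigma2, math.sqrt(sigma2)

def monte_carlo_dose(mean_conc, cv_conc, mean_intake, cv_intake,
                     n=20000, seed=1):
    """Propagate input uncertainty into the dose = conc * intake output."""
    rng = random.Random(seed)
    mc, sc = lognormal_params(mean_conc, cv_conc)
    mi, si = lognormal_params(mean_intake, cv_intake)
    return [rng.lognormvariate(mc, sc) * rng.lognormvariate(mi, si)
            for _ in range(n)]

doses = monte_carlo_dose(0.002, 0.8, 0.03, 0.3)  # hypothetical inputs
mean_dose = sum(doses) / len(doses)              # ~0.002 * 0.03 = 6e-5
```

For independent lognormal inputs the mean of the product equals the product of the means, so the sample mean should sit near 6e-5 here.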
Implementation of NGA-West2 ground motion models in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Harmsen, Stephen C.; Frankel, Arthur D.
2014-01-01
The U.S. National Seismic Hazard Maps (NSHMs) have been an important component of seismic design regulations in the United States for the past several decades. These maps present earthquake ground shaking intensities at specified probabilities of being exceeded over a 50-year time period. The previous version of the NSHMs was developed in 2008; during 2012 and 2013, scientists at the U.S. Geological Survey updated the maps based on their assessment of the “best available science,” resulting in the 2014 NSHMs. The update includes modifications to the seismic source models and the ground motion models (GMMs) for sites across the conterminous United States. This paper focuses on updates in the Western United States (WUS) due to the use of new GMMs for shallow crustal earthquakes in active tectonic regions developed by the Next Generation Attenuation (NGA-West2) project. Individual GMMs, their weighted combination, and their impact on the hazard maps relative to 2008 are discussed. In general, the combined effects of lower medians and increased standard deviations in the new GMMs have caused only small changes, within 5–20%, in the probabilistic ground motions for most sites across the WUS compared to the 2008 NSHMs.
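The "probability of being exceeded over a 50-year period" convention used by the maps relates to an annual exceedance rate through the standard Poisson occurrence assumption, which can be sketched as:

```python
import math

def annual_rate(p_exceed, years):
    """Annual exceedance rate implied by a Poisson occurrence model:
    P = 1 - exp(-rate * t)  =>  rate = -ln(1 - P) / t."""
    return -math.log(1.0 - p_exceed) / years

# The common "2% in 50 years" hazard level corresponds to a return
# period of roughly 2475 years:
rp = 1.0 / annual_rate(0.02, 50.0)
```

Under the same assumption, "10% in 50 years" corresponds to a return period of about 475 years, the level used in older design provisions.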
Sasaki, Osamu; Aihara, Mitsuo; Hagiya, Koichi; Nishiura, Akiko; Ishii, Kazuo; Satoh, Masahiro
2012-02-01
The objective of this study was to confirm the stability of the genetic estimation of longevity of the Holstein population in Japan. Data on the first 10 lactation periods were obtained from the Livestock Improvement Association of Japan. Longevity was defined as the number of days from first calving until culling or censoring. DATA1 and DATA2 included the survival records for the periods 1991-2003 and 1991-2005, respectively. The proportional hazard model included the effects of the region-parity-lactation stage-milk yield class, age at first calving, the herd-year-season, and sire. The heritabilities on the original scale for DATA1 and DATA2 were 0.119 and 0.123, respectively. The estimated transmitting abilities (ETAs) of young sires in DATA1 may have been underestimated, but coefficient δ, which indicated the bias of genetic trend between DATA1 and DATA2, was not significant. The regression coefficient of ETAs between DATA1 and DATA2 was very close to 1. The proportional hazard model could stably estimate the ETAs for longevity of the sires in Japan.
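The proportional hazard structure used above can be illustrated with a minimal sketch: the culling hazard is a baseline hazard scaled by exp of a linear predictor, so a sire effect shifts survival multiplicatively on the log-hazard scale. The baseline cumulative hazard and the effect size below are hypothetical.

```python
import math

def survival(baseline_cumhaz, covariate_effect):
    """S(t | x) = exp(-H0(t) * exp(eta)) under a proportional hazard
    model; eta would collect the region-parity-stage-yield class,
    age at first calving, herd-year-season and sire terms.  The
    numeric values used here are hypothetical."""
    return math.exp(-baseline_cumhaz * math.exp(covariate_effect))

# A sire term that lowers the log-hazard by 0.2 multiplies the culling
# hazard by exp(-0.2) at every time point:
hazard_ratio = math.exp(-0.2)
s_base = survival(0.5, 0.0)    # survival at a time with H0(t) = 0.5
s_sire = survival(0.5, -0.2)   # same time, favourable sire effect
```

A favourable (negative) sire effect yields uniformly higher survival, which is what an ETA for longevity summarizes.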
Automated Standard Hazard Tool
Stebler, Shane
2014-01-01
The current system used to generate standard hazard reports is considered cumbersome and iterative. This study defines a structure for this system's process in a clear, algorithmic way so that standard hazard reports and basic hazard analysis may be completed using a centralized, web-based computer application. To accomplish this task, a test server is used to host a prototype of the tool during development. The prototype is configured to easily integrate into NASA's current server systems with minimal alteration. Additionally, the tool is easily updated and provides NASA with a system that may grow to accommodate future requirements and possibly, different applications. Results of this project's success are outlined in positive, subjective reviews completed by payload providers and NASA Safety and Mission Assurance personnel. Ideally, this prototype will increase interest in the concept of standard hazard automation and lead to the full-scale production of a user-ready application.
Liu, Baoyin; Siu, Yim Ling; Mitchell, Gordon
2016-03-01
This paper develops a systematic hazard interaction classification based on the geophysical environment that natural hazards arise from - the hazard-forming environment. According to their contribution to natural hazards, geophysical environmental factors in the hazard-forming environment were categorized into two types. The first are relatively stable factors which construct the precondition for the occurrence of natural hazards, whilst the second are trigger factors, which determine the frequency and magnitude of hazards. Different combinations of geophysical environmental factors induce different hazards. Based on these geophysical environmental factors for some major hazards, the stable factors are used to identify which kinds of natural hazards influence a given area, and trigger factors are used to classify the relationships between these hazards into four types: independent, mutex, parallel and series relationships. This classification helps to ensure all possible hazard interactions among different hazards are considered in multi-hazard risk assessment. This can effectively fill the gap in current multi-hazard risk assessment methods which to date only consider domino effects. In addition, based on this classification, the probability and magnitude of multiple interacting natural hazards occurring together can be calculated. Hence, the developed hazard interaction classification provides a useful tool to facilitate improved multi-hazard risk assessment.
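The four relationship types can be given a simplified probabilistic reading. This is an illustration of the idea, not the paper's exact formulas: p1 and p2 are per-period occurrence probabilities, and the conditional probability models a triggering (series) relationship.

```python
# Simplified co-occurrence probabilities for the four hazard
# relationship types; all numeric inputs are hypothetical.
def independent_both(p1, p2):
    return p1 * p2                  # no shared trigger factor

def mutex_both(p1, p2):
    return 0.0                      # the hazards exclude one another

def parallel_either(p1, p2):
    return 1 - (1 - p1) * (1 - p2)  # same trigger, either may occur

def series_both(p1, p2_given_1):
    return p1 * p2_given_1          # the first hazard triggers the second
```

Accounting for the parallel and series cases is what lets a multi-hazard assessment go beyond treating hazards as independent or as pure domino chains.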
Formation and reduction of carcinogenic furan in various model systems containing food additives.
Kim, Jin-Sil; Her, Jae-Young; Lee, Kwang-Geun
2015-12-15
The aim of this study was to analyse and reduce furan in various model systems. Furan model systems consisting of monosaccharides (0.5M glucose and ribose), amino acids (0.5M alanine and serine) and/or 1.0M ascorbic acid were heated at 121°C for 25 min. The effects of food additives (each 0.1M) such as metal ions (iron sulphate, magnesium sulphate, zinc sulphate and calcium sulphate), antioxidants (BHT and BHA), and sodium sulphite on the formation of furan were measured. The level of furan formed in the model systems was 6.8-527.3 ng/ml. The level of furan in the model systems of glucose/serine and glucose/alanine increased 7-674% when food additives were added. In contrast, the level of furan decreased by 18-51% in the Maillard reaction model systems that included ribose and alanine/serine with food additives except zinc sulphate.
NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid
Thomas, Togis; Gupta, K. K.
2016-03-01
Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently being used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model of narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz by using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by the addition of an 11 kV/230 V transformer and by varying the load and load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
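The ABCD-parameter approach chains two-port sections by matrix multiplication and then converts the overall matrix into a transfer function. A minimal sketch, neglecting the source impedance (a common simplification) and using a hypothetical single L-section at one frequency:

```python
def cascade(sections):
    """Multiply the ABCD matrices of series-connected line sections."""
    A, B, C, D = 1, 0, 0, 1          # identity two-port
    for a, b, c, d in sections:
        A, B, C, D = (A * a + B * c, A * b + B * d,
                      C * a + D * c, C * b + D * d)
    return A, B, C, D

def transfer_function(sections, z_load):
    """Voltage transfer H = V_load / V_in of the cascaded channel,
    neglecting the source impedance."""
    A, B, _, _ = cascade(sections)
    return z_load / (A * z_load + B)

# Hypothetical L-section: series impedance Z and shunt admittance Y
# give ABCD = [[1 + Z*Y, Z], [Y, 1]] at a single frequency.
Z, Y = complex(2.0, 5.0), complex(0.0, 1e-3)
h = transfer_function([(1 + Z * Y, Z, Y, 1)], z_load=50)
```

A frequency sweep would repeat this per frequency bin, and the transformer and load variations in the abstract enter as additional ABCD sections in the cascade.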
Sun, Yan; Lang, Maoxiang; Wang, Danzhu
2016-01-01
The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics are comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) an environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformulations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing–Tianjin–Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294
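The normalized weighted sum method used above can be sketched on a toy instance: scale each objective to [0, 1], scalarize with a weight, and sweep the weight to trace Pareto solutions. The candidate routes and their (generalized cost, social risk) values are hypothetical; in the paper they would come from the mixed integer program.

```python
# Hypothetical candidate routes: name -> (generalized cost, social risk).
routes = {"rail-road A": (120.0, 0.30),
          "rail-road B": (150.0, 0.18),
          "road only":   (100.0, 0.55)}

def normalized_weighted_sum(objs, w_cost):
    """Pick the route minimizing the weighted sum of objectives,
    each scaled to [0, 1] first so the weights are comparable."""
    costs = [c for c, _ in objs.values()]
    risks = [r for _, r in objs.values()]
    def scale(v, lo, hi):
        return (v - lo) / (hi - lo) if hi > lo else 0.0
    return min(objs, key=lambda k:
               w_cost * scale(objs[k][0], min(costs), max(costs))
               + (1 - w_cost) * scale(objs[k][1], min(risks), max(risks)))

# Sweeping the weight from pure-risk (0) to pure-cost (1) traces the
# Pareto-optimal routes:
pareto = {normalized_weighted_sum(routes, w / 10) for w in range(11)}
```

All three toy routes are nondominated (each trades cost against risk), so all three appear as the weight is swept.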
Sun, Yan; Lang, Maoxiang; Wang, Danzhu
2016-07-28
Miller, Craig A.; Williams-Jones, Glyn
2016-06-01
A new 3D geophysical model of the Mt Tongariro Volcanic Massif (TgVM), New Zealand, provides a high resolution view of the volcano's internal structure and hydrothermal system, from which we derive implications for volcanic hazards. Geologically constrained 3D inversions of potential field data provide a greater level of insight into the volcanic structure than is possible from unconstrained models. A complex region of gravity highs and lows (± 6 mGal) is set within a broader, ~ 20 mGal gravity low. A magnetic high (1300 nT) is associated with Mt Ngauruhoe, while a substantial, thick, demagnetised area occurs to the north, coincident with a gravity low and interpreted as representing the hydrothermal system. The hydrothermal system is constrained to the west by major faults, interpreted as an impermeable barrier to fluid migration, and extends to basement depth. These faults are considered low probability areas for future eruption sites, as there is little to indicate they have acted as magmatic pathways. Where the hydrothermal system coincides with steep topographic slopes, an increased likelihood of landslides is present, and the newly delineated hydrothermal system maps the area most likely to have phreatic eruptions. Such eruptions, while small on a global scale, are important hazards at the TgVM as it is a popular hiking area with hundreds of visitors per day in close proximity to eruption sites. The model shows that the volume of volcanic material erupted over the lifespan of the TgVM is five to six times greater than previous estimates, suggesting a higher rate of magma supply, in line with global rates of andesite production. We suggest that our model of physical property distribution can be used to provide constraints for other models of dynamic geophysical processes occurring at the TgVM.
DEFF Research Database (Denmark)
Scheike, Thomas Harder
2002-01-01
We use the additive risk model of Aalen (Aalen, 1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model...... for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale based variance...... estimator is incorrect and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994) and show that the standard errors that are computed based on an assumption of intensities are incorrect and give a different...
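The Aalen additive hazard structure underlying the abstract, h(t | x) = beta0(t) + beta1(t) x, can be sketched numerically. As a simplifying assumption the coefficient functions are held constant here; in the Aalen model they are arbitrary functions of time, estimated cumulatively from the data.

```python
import math

def additive_survival(times, beta0, beta1, x):
    """S(t | x) under an additive hazard h(t | x) = beta0 + beta1 * x,
    integrated on a time grid.  Constant coefficients are a
    simplifying assumption for illustration only."""
    surv, cumhaz, prev = [], 0.0, 0.0
    for t in times:
        cumhaz += (beta0 + beta1 * x) * (t - prev)
        surv.append(math.exp(-cumhaz))
        prev = t
    return surv

# Hypothetical rates: baseline 0.1 per unit time, +0.05 per unit of x.
s = additive_survival([0.5, 1.0, 2.0], beta0=0.1, beta1=0.05, x=2.0)
```

Covariates shift the hazard by an added amount rather than a multiplicative factor, which is the contrast with the proportional (Cox-type) models discussed elsewhere in this collection.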
Open Source Software for Mapping Human Impacts on Marine Ecosystems with an Additive Model
Directory of Open Access Journals (Sweden)
Andy Stock
2016-06-01
Full Text Available This paper describes an easy-to-use open source software tool implementing a commonly used additive model (Halpern et al., 'Science', 2008) for mapping human impacts on marine ecosystems. The tool has been used to map the potential for cumulative human impacts in Arctic marine waters and can support future human impact mapping projects by (1) making the model easier to use; (2) making updates of model results straightforward when better input data become available; (3) storing input data and information about processing steps in a defined format and thus facilitating data sharing and reproduction of modeling results; (4) supporting basic visualization of model inputs and outputs without the need for advanced technical skills. The tool, called EcoImpactMapper, was implemented in Java and is thus platform-independent. A tutorial, example data, the tool and the source code are available online.
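The additive model the tool implements sums, per grid cell, stressor intensity times ecosystem presence times a vulnerability weight. A minimal sketch for a single cell, with hypothetical stressor intensities and weights (the tool itself is Java; Python is used here only for illustration):

```python
def cumulative_impact(stressors, ecosystems, vulnerability):
    """Additive impact score for one grid cell.
    stressors: {name: intensity}, ecosystems: {name: presence 0/1},
    vulnerability: {(stressor, ecosystem): weight}."""
    return sum(d * e * vulnerability.get((i, j), 0.0)
               for i, d in stressors.items()
               for j, e in ecosystems.items())

# Hypothetical Arctic cell with two stressors and two ecosystem
# components present:
cell = cumulative_impact(
    {"shipping": 0.8, "fishing": 0.5},
    {"sea_ice_edge": 1, "benthic": 1},
    {("shipping", "sea_ice_edge"): 2.0, ("fishing", "benthic"): 3.0})
```

Mapping then repeats this computation per cell, which is why points (2) and (3) above, easy re-runs when input layers improve, matter in practice.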
EFFECT OF NANOPOWDER ADDITION ON THE FLEXURAL STRENGTH OF ALUMINA CERAMIC - A WEIBULL MODEL ANALYSIS
Directory of Open Access Journals (Sweden)
Daidong Guo
2016-05-01
Full Text Available Alumina ceramics were prepared either with micrometer-sized alumina powder (MAP) or with the addition of nanometer-sized alumina powder (NAP). The density, crystalline phase, flexural strength and the fracture surface of the two ceramics were measured and compared. Emphasis has been put on the influence of nanopowder addition on the flexural strength of Al₂O₃ ceramic. The analysis based on the Weibull distribution model suggests the distribution of the flexural strength of the NAP ceramic is more concentrated than that of the MAP ceramic. Therefore, the NAP ceramics will be more stable and reliable in real applications.
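The Weibull analysis behind the "more concentrated" conclusion estimates the Weibull modulus m: a higher m means a narrower strength distribution. A common estimation sketch linearizes the Weibull CDF and fits by least squares; the strength values below are hypothetical.

```python
import math

def weibull_modulus(strengths):
    """Least-squares estimate of the Weibull modulus m from the
    linearized CDF  ln(-ln(1 - P)) = m*ln(sigma) - m*ln(sigma0),
    using median-rank plotting positions P_i = (i - 0.5)/n."""
    n = len(strengths)
    xs, ys = [], []
    for i, v in enumerate(sorted(strengths), start=1):
        p = (i - 0.5) / n
        xs.append(math.log(v))
        ys.append(math.log(-math.log(1.0 - p)))
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A narrower strength distribution gives a larger modulus, i.e. the
# "more concentrated" strengths reported for the NAP ceramic:
m_narrow = weibull_modulus([290, 295, 300, 305, 310])  # hypothetical MPa
m_wide = weibull_modulus([200, 250, 300, 350, 400])
```

The slope of the fitted line is the modulus; the intercept recovers the characteristic strength sigma0.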
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
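The two properties the abstract leans on, nonnegativity and unity of support of the Bernstein basis, plus the additive decomposition over input dimensions, can be sketched directly. The weights below are hypothetical; in the paper they are fitted by least squares.

```python
from math import comb

def bernstein_basis(n, x):
    """Degree-n Bernstein polynomial basis at x in [0, 1].  The basis
    functions are nonnegative and sum to one, which is what allows
    them to be read as fuzzy membership functions."""
    return [comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

def additive_model(weight_sets, inputs):
    """Additive decomposition: one univariate Bernstein expansion per
    input dimension, summed, sidestepping the curse of dimensionality.
    weight_sets[k] holds the (here hypothetical) weights for input k."""
    out = 0.0
    for ws, x in zip(weight_sets, inputs):
        out += sum(w * b
                   for w, b in zip(ws, bernstein_basis(len(ws) - 1, x)))
    return out
```

With equal weights c in a dimension, that dimension contributes exactly c (partition of unity), a convenient sanity check on any implementation.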
The effect of tailor-made additives on crystal growth of methyl paraben: Experiments and modelling
Cai, Zhihui; Liu, Yong; Song, Yang; Guan, Guoqiang; Jiang, Yanbin
2017-03-01
In this study, methyl paraben (MP) was selected as the model component, and acetaminophen (APAP), p-methyl acetanilide (PMAA) and acetanilide (ACET), which share the similar molecular structure as MP, were selected as the three tailor-made additives to study the effect of tailor-made additives on the crystal growth of MP. HPLC results indicated that the MP crystals induced by the three additives contained MP only. Photographs of the single crystals prepared indicated that the morphology of the MP crystals was greatly changed by the additives, but PXRD and single crystal diffraction results illustrated that the MP crystals were the same polymorph only with different crystal habits, and no new crystal form was found compared with other references. To investigate the effect of the additives on the crystal growth, the interaction between additives and facets was discussed in detail using the DFT methods and MD simulations. The results showed that APAP, PMAA and ACET would be selectively adsorbed on the growth surfaces of the crystal facets, which induced the change in MP crystal habits.
Test of the Additivity Principle for Current Fluctuations in a Model of Heat Conduction
Hurtado, Pablo I.; Garrido, Pedro L.
2009-06-01
The additivity principle makes it possible to compute the current distribution in many one-dimensional (1D) nonequilibrium systems. Using simulations, we confirm this conjecture in the 1D Kipnis-Marchioro-Presutti model of heat conduction for a wide current interval. The current distribution shows both Gaussian and non-Gaussian regimes, and obeys the Gallavotti-Cohen fluctuation theorem. We verify the existence of a well-defined temperature profile associated with a given current fluctuation. This profile is independent of the sign of the current, and this symmetry extends to higher-order profiles and spatial correlations. We also show that finite-time joint fluctuations of the current and the profile are described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.
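The Kipnis-Marchioro-Presutti dynamics itself is simple to state: a randomly chosen neighbouring pair pools its energy and redistributes it uniformly. A minimal closed-chain sketch (the current-fluctuation studies above additionally couple the boundary sites to thermal baths, which is omitted here):

```python
import random

def kmp_step(energies, rng):
    """One KMP move: a random neighbouring pair pools its energy and
    splits it uniformly at random."""
    i = rng.randrange(len(energies) - 1)
    total = energies[i] + energies[i + 1]
    p = rng.random()
    energies[i], energies[i + 1] = p * total, (1 - p) * total

rng = random.Random(42)
energies = [1.0] * 20        # closed chain, no boundary baths here
for _ in range(5000):
    kmp_step(energies, rng)
total = sum(energies)        # conserved in the closed chain
```

In the driven setting, replacing the two end sites' updates with draws from bath distributions at different temperatures produces the energy current whose fluctuations the additivity principle describes.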
Directory of Open Access Journals (Sweden)
Thomas Ingersoll
Full Text Available Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies.
Use of additive technologies for practical working with complex models for foundry technologies
Olkhovik, E.; Butsanets, A. A.; Ageeva, A. A.
2016-07-01
The article presents the results of research into the application of additive technology (3D printing) for developing geometrically complex models of cast parts. Investment casting is a well-known and widely used technology for the production of complex parts. The work proposes the use of 3D printing technology for manufacturing model parts, which are removed by thermal destruction. Traditional methods of tooling production for investment casting involve manual labor, which has problems with dimensional accuracy, or CNC machining, which is less used. Such a scheme has low productivity and demands considerable time. We offer an alternative method which consists in printing the main components using a 3D printer (PLA and ABS) with subsequent production of casting models from them. In this article, the main technological methods are considered and their problems are discussed. The dimensional accuracy of the models in comparison with investment casting technology is considered as the main aspect.
Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry
2016-01-01
PMID:27110941
Modeling the Use of Sulfate Additives for Potassium Chloride Destruction in Biomass Combustion
DEFF Research Database (Denmark)
Wu, Hao; Pedersen, Morten Nedergaard; Jespersen, Jacob Boll;
2014-01-01
Potassium chloride, KCl, formed from biomass combustion may lead to ash deposition and corrosion problems in boilers. Sulfates are effective additives for converting KCl to the less harmful K2SO4 and HCl. In the present study, the rate constants for decomposition of ammonium sulfate and aluminum...... sulfate were obtained from experiments in a fast heating rate thermogravimetric analyzer. The yields of SO2 and SO3 from the decomposition were investigated in a tube reactor at 600–900 °C, revealing a constant distribution of about 15% SO2 and 85% SO3 from aluminum sulfate decomposition and a temperature......-dependent distribution of SO2 and SO3 from ammonium sulfate decomposition. On the basis of these data as well as earlier results, a detailed chemical kinetic model for sulfation of KCl by a range of sulfate additives was established. Modeling results were compared to biomass combustion experiments in a bubbling...
Choosing components in the additive main effect and multiplicative interaction (AMMI) models
Directory of Open Access Journals (Sweden)
Dias Carlos Tadeu dos Santos
2006-01-01
Full Text Available The additive main effect and multiplicative interaction (AMMI) models allow analysts to detect interactions between rows and columns in a two-way table. However, there are many methods proposed in the literature to determine the number of multiplicative components to include in the AMMI model. These methods typically give different results for any particular data set, so the user needs some guidance as to which methods to use. In this paper we compare four commonly used methods using simulated data based on real experiments, and provide some general recommendations.
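The multiplicative components in AMMI are obtained from a singular value decomposition of the double-centred interaction residuals of the two-way table. The centring step can be sketched with stdlib Python (the SVD itself would need a linear algebra library and is omitted); the 2x2 genotype-by-environment values are hypothetical.

```python
def interaction_residuals(table):
    """Double-centred residuals of a genotype x environment table:
    r_ij = y_ij - rowmean_i - colmean_j + grandmean.
    AMMI applies an SVD to this matrix and keeps the first few
    multiplicative components."""
    nr, nc = len(table), len(table[0])
    row = [sum(r) / nc for r in table]
    col = [sum(table[i][j] for i in range(nr)) / nr for j in range(nc)]
    grand = sum(row) / nr
    return [[table[i][j] - row[i] - col[j] + grand
             for j in range(nc)] for i in range(nr)]

res = interaction_residuals([[4.0, 6.0], [5.0, 9.0]])
```

Every row and column of the residual matrix sums to zero by construction, which is why only the interaction pattern, not the main effects, reaches the SVD.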
Directory of Open Access Journals (Sweden)
Hai Liu
2010-10-01
Full Text Available The zero-inflation problem is very common in ecological studies as well as other areas. Nonparametric regression with zero-inflated data may be studied via the zero-inflated generalized additive model (ZIGAM), which assumes that the zero-inflated responses come from a probabilistic mixture of zero and a regular component whose distribution belongs to the 1-parameter exponential family. With the further assumption that the probability of non-zero-inflation is some monotonic function of the mean of the regular component, we propose the constrained zero-inflated generalized additive model (COZIGAM) for analyzing zero-inflated data. When the hypothesized constraint obtains, the new approach provides a unified framework for modeling zero-inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We have developed an R package COZIGAM which contains functions that implement an iterative algorithm for fitting ZIGAMs and COZIGAMs to zero-inflated data based on the penalized likelihood approach. Other functions included in the package are useful for model prediction and model selection. We demonstrate the use of the COZIGAM package via some simulation studies and a real application.
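The mixture at the heart of ZIGAM can be illustrated with its simplest instance, a zero-inflated Poisson: a point mass at zero mixed with a regular Poisson component. In the ZIGAM both the mixing probability and the Poisson mean are smooth functions of covariates; here they are constants for illustration.

```python
import math

def zip_pmf(k, pi_zero, lam):
    """Zero-inflated Poisson pmf: with probability pi_zero the response
    is a structural zero; otherwise it is Poisson(lam).  Constant
    parameters are a simplification of the ZIGAM setting."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi_zero + (1 - pi_zero) * poisson
    return (1 - pi_zero) * poisson
```

The COZIGAM constraint then ties pi_zero to the mean of the regular component through a monotonic function, so the two parts of the mixture are no longer modeled independently.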
Samadhi, TMAA; Sumihartati, Atin
2016-02-01
The most critical stage in a garment industry is the sewing process, because generally it consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to work stations with balanced workloads. Many studies on assembly line balancing assume a new assembly line but, in reality, due to demand fluctuations and increases, re-balancing is needed. To cope with those fluctuating demand changes, additional capacity can be obtained by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model on an existing line to cope with fluctuating demand. Capacity redesign is decided if the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity addition costs, losses due to idle capacity, and outsourcing costs. The model developed is based on an integer programming model. The model is tested on a set of data covering one year of demand, with an existing number of sewing machines of 41 units. The result shows that an additional maximum capacity of up to 76 machine units is required when there is an increase of 60% over the average demand, at equal cost parameters.
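The cost components named in the objective can be illustrated by evaluating two candidate capacity plans, invest-in-machines versus outsource. All cost coefficients below are hypothetical, and the real model chooses these decisions jointly through integer programming rather than by enumeration.

```python
def total_cost(operating, n_new_machines, machine_cost,
               idle_hours, idle_cost_per_hour,
               outsourced_units, outsource_cost_per_unit):
    """Total assembly-line cost: operating + machine investment +
    idle-capacity losses + outsourcing, as in the abstract's
    objective (capacity addition cost folded into machine_cost)."""
    return (operating
            + n_new_machines * machine_cost
            + idle_hours * idle_cost_per_hour
            + outsourced_units * outsource_cost_per_unit)

# Two hypothetical plans for the same demand increase:
invest = total_cost(10000, 5, 800, 120, 2.5, 0, 4.0)
outsource = total_cost(10000, 0, 800, 0, 2.5, 1500, 4.0)
best = min(invest, outsource)
```

The idle-capacity term is what penalizes over-investing in machines that future demand may not use, the trade-off the model is built around.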
Medero, Rafael; García-Rodríguez, Sylvana; François, Christopher J; Roldán-Alzate, Alejandro
2017-03-21
Non-invasive hemodynamic assessment of total cavopulmonary connection (TCPC) is challenging due to the complex anatomy. Additive manufacturing (AM) is a suitable alternative for creating patient-specific in vitro models for flow measurements using four-dimensional (4D) Flow MRI. These in vitro systems have the potential to serve as validation for computational fluid dynamics (CFD), simulating different physiological conditions. This study investigated three different AM technologies, stereolithography (SLA), selective laser sintering (SLS) and fused deposition modeling (FDM), to determine differences in hemodynamics when measuring flow using 4D Flow MRI. The models were created using patient-specific MRI data from an extracardiac TCPC. These models were connected to a perfusion pump circulating water at three different flow rates. Data was processed for visualization and quantification of velocity, flow distribution, vorticity and kinetic energy. These results were compared between each model. In addition, the flow distribution obtained in vitro was compared to in vivo. The results showed significant difference in velocities measured at the outlets of the models that required internal support material when printing. Furthermore, an ultrasound flow sensor was used to validate flow measurements at the inlets and outlets of the in vitro models. These results were highly correlated to those measured with 4D Flow MRI. This study showed that commercially available AM technologies can be used to create patient-specific vascular models for in vitro hemodynamic studies at reasonable costs. However, technologies that do not require internal supports during manufacturing allow smoother internal surfaces, which makes them better suited for flow analyses.
Mathematical modeling of a polyphenolic additive made from grape stone for meat products
Directory of Open Access Journals (Sweden)
Інна Олександрівна Літвінова
2015-07-01
Full Text Available In this article the optimal parameters for obtaining "Maltovyn", a polyphenol additive of antioxidant purpose made from grape stone, are defined by the method of mathematical planning of multifactor experiments. Research was conducted according to the matrix of a D-optimal quadratic experimental plan. The results of the microwave extraction of phenolic compounds with maximum antioxidant activity are obtained. It was established that the selected model provides a set of values that minimize the divergence between calculated and experimental data.
Rain water transport and storage in a model sandy soil with hydrogel particle additives.
Wei, Y; Durian, D J
2014-10-01
We study rain water infiltration and drainage in a dry model sandy soil with superabsorbent hydrogel particle additives by measuring the mass of retained water for non-ponding rainfall using a self-built 3D laboratory set-up. In the pure model sandy soil, the retained water curve measurements indicate that instead of a stable horizontal wetting front that grows downward uniformly, a narrow fingered flow forms under the top layer of water-saturated soil. This rain water channelization phenomenon not only further reduces the available rain water in the plant root zone, but also affects the efficiency of soil additives, such as superabsorbent hydrogel particles. Our studies show that the shape of the retained water curve for a soil packing with hydrogel particle additives strongly depends on the location and the concentration of the hydrogel particles in the model sandy soil. By carefully choosing the particle size and distribution methods, we may use the swollen hydrogel particles to modify the soil pore structure, to clog or extend the water channels in sandy soils, or to build water reservoirs in the plant root zone.
Mamy Rakotoarisoa, Mahefa; Fleurant, Cyril; Taibi, Nuscia; Razakamanana, Théodore
2016-04-01
Hydrological risks, especially floods, are recurrent on the Fiherenana watershed in southwest Madagascar. The city of Toliara, located at the outlet of the river basin, is subjected each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with an analysis of the hazard, collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using simple statistical models (decomposition of time series). Two approaches are then used to assess the vulnerability of the city of Toliara and the surrounding villages. The first is a static approach, based on field surveys and the use of GIS. The second is a multi-agent-based simulation model. The first step is the mapping of a vulnerability index combining several static criteria. This is a microscale indicator (the scale used is the individual house). For each house, several vulnerability criteria are considered, such as the potential water depth, the flow rate, and the architectural typology of the building. In the second part, agent-based simulations are used to evaluate the degree of vulnerability of homes to flooding. Agents are individual entities to which behaviours can be assigned in order to simulate a given phenomenon. The aim is not to assign a criterion to the house as a physical building, such as its architectural typology or its strength, but to estimate the chances of the occupants of the house escaping a catastrophic flood. For this purpose, various settings and scenarios are compared. Some scenarios take into account the effects of certain decisions made by the responsible entities (information and awareness campaigns for the villagers, for example). The simulation consists of two essential parts taking place simultaneously in time: simulation of the rise of water and the flow using
Exploration of land-use scenarios for flood hazard modeling – the case of Santiago de Chile
Directory of Open Access Journals (Sweden)
A. Müller
2011-04-01
Full Text Available Urban expansion leads to modifications in land use and land cover and to the loss of vegetated areas. In some regions of the world, these developments are accelerated by a changing regional climate. As a consequence, major changes in the amount of green space can be observed in many urban regions. Amongst other factors, the amount of green space determines the availability of retention areas in a watershed. The goal of this research is to develop possible land-use and land-cover scenarios for a watershed and to explore the influence of land-use and land-cover changes on its runoff behavior using the distributed hydrological model HEC-HMS. The study area for this research is a small peri-urban watershed in the eastern area of Santiago de Chile.
Three spatially explicit exploratory land-use/land-cover scenario alternatives were developed based on the analysis of previous land-use developments using high-resolution satellite data, on the analysis of urban planning laws, on the analysis of climate change predictions, and on expert interviews. Modeling the resulting changes in runoff makes it possible to predict the changes in flood hazard which the adjacent urban areas face after heavy winter precipitation events. The paper shows how HEC-HMS was applied in a distributed event-modeling approach. The derived runoff values are combined with existing flood hazard maps and can be regarded as an important source of information for adaptation to changing conditions in the study area. The most significant finding is that the land-use changes that have to be expected after long drought periods pose the highest risk with respect to floods.
Evaluation of seismic hazard at the northwestern part of Egypt
Ezzelarab, M.; Shokry, M. M. F.; Mohamed, A. M. E.; Helal, A. M. A.; Mohamed, Abuoelela A.; El-Hadidy, M. S.
2016-01-01
The objective of this study is to evaluate the seismic hazard in northwestern Egypt using the probabilistic seismic hazard assessment approach. The probabilistic approach was applied to a recent data set in order to take into account historical seismicity and updated instrumental seismicity. A homogeneous earthquake catalogue was compiled and a proposed seismic source model was presented. The doubly-truncated exponential model was adopted for the calculation of the recurrence parameters. Ground-motion prediction equations recently recommended by experts, developed from earthquake data obtained in tectonic environments similar to those in and around the study area, were weighted and used for the assessment of seismic hazard within a logic-tree framework. Considering a grid of 0.2° × 0.2° covering the study area, seismic hazard curves were calculated for every node. Hazard maps at bedrock conditions were produced for peak ground acceleration, in addition to six spectral periods (0.1, 0.2, 0.3, 1.0, 2.0 and 3.0 s), for return periods of 72, 475 and 2475 years. Uniform hazard spectra for two selected rock sites in the cities of Alexandria and Mersa Matruh were provided. Finally, the hazard curves were de-aggregated to determine the sources that contribute most to the hazard at the level of 10% probability of exceedance in 50 years for the selected sites.
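The hazard-curve computation outlined above can be illustrated with a toy single-source, Cornell-type calculation: the annual rate of exceeding a ground-motion level is the source's activity rate times the probability of exceedance averaged over a truncated Gutenberg-Richter magnitude distribution. The activity rate, recurrence parameters, and the lognormal ground-motion relation below are invented for illustration and are not the equations or logic-tree weights used in the study.

```python
import numpy as np
from math import erf, sqrt

def hazard_curve(pga_levels, nu=0.5, b=1.0, m_min=4.5, m_max=7.5, sigma=0.6):
    """Toy Cornell-type PSHA for a single areal source (hypothetical parameters)."""
    beta = b * np.log(10.0)
    m = np.linspace(m_min, m_max, 200)
    # doubly-truncated exponential (Gutenberg-Richter) magnitude pdf
    fm = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
    rates = []
    for a in pga_levels:
        mu_ln = -8.0 + 1.2 * m        # toy GMPE: median ln PGA (g), assumed
        z = (np.log(a) - mu_ln) / sigma
        p_exc = np.array([0.5 * (1.0 - erf(zi / sqrt(2.0))) for zi in z])
        integrand = p_exc * fm
        # trapezoidal integration over magnitude
        rates.append(nu * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(m)))
    return np.array(rates)

levels = np.array([0.05, 0.1, 0.2, 0.4])   # PGA levels in g
annual_rates = hazard_curve(levels)
# 10% probability of exceedance in 50 years corresponds, under a Poisson
# assumption, to an annual exceedance rate of -ln(0.9)/50 ~ 0.0021.
```

A full assessment sums such curves over all sources and repeats the calculation for each logic-tree branch before combining with the branch weights.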
Centers for Disease Control (CDC) Podcasts
2007-04-10
Chemicals are a part of our daily lives, providing many products and modern conveniences. With more than three decades of experience, the Centers for Disease Control and Prevention (CDC) has been at the forefront of efforts to protect people and assess their exposure to environmental and hazardous chemicals. This report provides information about hazardous chemicals and useful tips on how to protect yourself and your family from harmful exposure. Created: 4/10/2007 by CDC National Center for Environmental Health. Date Released: 4/13/2007.
Directory of Open Access Journals (Sweden)
S. K. Allen
2009-03-01
Full Text Available Flood and mass movements originating from glacial environments are particularly devastating in populated mountain regions of the world, but in the remote Mount Cook region of New Zealand's Southern Alps minimal attention has been given to these processes. Glacial environments are characterized by high mass turnover and, combined with changing climatic conditions, potential problems and process interactions can evolve rapidly. Remote sensing based terrain mapping, geographic information systems and flow path modelling are integrated here to explore the extent of ice avalanche, debris flow and lake flood hazard potential in the Mount Cook region. Numerous proglacial lakes have formed during recent decades, but well vegetated, low-gradient outlet areas suggest that catastrophic dam failure and flooding are unlikely. However, potential impacts from incoming mass movements of ice, debris or rock could lead to dam overtopping, particularly where lakes are forming directly beneath steep slopes. Physically based numerical modelling with RAMMS was introduced for local-scale analyses of rock avalanche events, and was shown to be a useful tool for establishing accurate flow path dynamics and estimating potential event magnitudes. Potential debris flows originating from steep moraine and talus slopes can reach roads and built infrastructure when worst-case runout distances are considered, while potential effects from ice avalanches are limited to walking tracks and alpine huts located in close proximity to initiation zones of steep ice. Further local-scale studies of these processes are required, leading towards a full hazard assessment, and changing glacial conditions over the coming decades will necessitate ongoing monitoring and reassessment of initiation zones and potential impacts.
Directory of Open Access Journals (Sweden)
Andrzej Augustynek
2007-01-01
Full Text Available In June 2006, over 12.6 million Polish Web users were registered. On average, each of them spent 21 hours and 37 minutes monthly browsing the Web. That is why the psychological aspects of computer utilization have become an urgent research subject. The results of research into the development of the Polish information society, carried out at AGH University of Science and Technology under the leadership of Leslaw H. Haber from 2000 until the present time, indicate the emergence of dynamic changes in the ways computers are used and in their circumstances. One of the interesting regularities has been the inversely proportional relation between the level of computer skills and the frequency of Web utilization. It has been found that in 2005, compared to 2000, the following changes occurred: a significant drop in the number of students who never used computers and the Web; a remarkable increase in computer knowledge and skills (particularly pronounced in the case of first-year students); a decreasing gap in computer skills between students of the first and the third year and between male and female students; and the declining popularity of computer games. It has also been demonstrated that the hazard of computer screen addiction was highest in the case of unemployed youth outside the school system. As much as 12% of this group of young people were addicted to the computer. The large amount of leisure time that these youths enjoyed induced them to make excessive use of the Web. Polish housewives are another population group at risk of addiction to the Web. The duration of long Web chats carried out by ever younger youths has been another matter of concern. Since the phenomenon of computer addiction is relatively new, no specific therapy methods have been developed. In general, the therapy applied to computer addiction syndrome is similar to the techniques applied in cases of alcohol or gambling addiction. Individual and group
Seo, Seulgi; Ka, Mi-Hyun; Lee, Kwang-Geun
2014-07-09
The effect of various food additives on the formation of carcinogenic 4(5)-methylimidazole (4-MI) in a caramel model system was investigated. The relationship between the levels of 4-MI and various pyrazines was studied. When glucose and ammonium hydroxide were heated, the amount of 4-MI was 556 ± 1.3 μg/mL, which increased to 583 ± 2.6 μg/mL by the addition of 0.1 M of sodium sulfite. When various food additives, such as 0.1 M of iron sulfate, magnesium sulfate, zinc sulfate, tryptophan, and cysteine were added, the amount of 4-MI was reduced to 110 ± 0.7, 483 ± 2.0, 460 ± 2.0, 409 ± 4.4, and 397 ± 1.7 μg/mL, respectively. The greatest reduction, 80%, occurred with the addition of iron sulfate. Among the 12 pyrazines, 2-ethyl-6-methylpyrazine with 4-MI showed the highest correlation (r = -0.8239).
Energy Technology Data Exchange (ETDEWEB)
Baruffi, F. [Autorita di Bacino dei Fiumi dell' Alto Adriatico, Cannaregio 4314, 30121 Venice (Italy); Cisotto, A., E-mail: segreteria@adbve.it [Autorita di Bacino dei Fiumi dell' Alto Adriatico, Cannaregio 4314, 30121 Venice (Italy); Cimolino, A.; Ferri, M.; Monego, M.; Norbiato, D.; Cappelletto, M.; Bisaglia, M. [Autorita di Bacino dei Fiumi dell' Alto Adriatico, Cannaregio 4314, 30121 Venice (Italy); Pretner, A.; Galli, A. [SGI Studio Galli Ingegneria, via della Provvidenza 13, 35030 Sarmeola di Rubano (PD) (Italy); Scarinci, A., E-mail: andrea.scarinci@sgi-spa.it [SGI Studio Galli Ingegneria, via della Provvidenza 13, 35030 Sarmeola di Rubano (PD) (Italy); Marsala, V.; Panelli, C. [SGI Studio Galli Ingegneria, via della Provvidenza 13, 35030 Sarmeola di Rubano (PD) (Italy); Gualdi, S., E-mail: silvio.gualdi@bo.ingv.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Bucchignani, E., E-mail: e.bucchignani@cira.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Torresan, S., E-mail: torresan@cmcc.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Pasini, S., E-mail: sara.pasini@stud.unive.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Calle Larga S. Marta 2137, 30123 Venice (Italy); Critto, A., E-mail: critto@unive.it [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), via Augusto Imperatore 16, 73100 Lecce (Italy); Department of Environmental Sciences, Informatics and Statistics, University Ca' Foscari Venice, Calle Larga S. Marta 2137, 30123 Venice (Italy); and others
2012-12-01
Climate change impacts on water resources, particularly groundwater, are a highly debated topic worldwide, triggering international attention and interest from both researchers and policy makers due to their relevant link with European water policy directives (e.g. 2000/60/EC and 2006/118/EC) and related environmental objectives. The understanding of the long-term impacts of climate variability and change is therefore a key challenge in order to devise effective protection measures and to implement sustainable management of water resources. This paper presents the modeling approach adopted within the Life+ project TRUST (Tool for Regional-scale assessment of groUndwater Storage improvement in adaptation to climaTe change) in order to provide climate change hazard scenarios for the shallow groundwater of the upper Veneto and Friuli Plain, Northern Italy. Given the aim of evaluating potential impacts on water quantity and quality (e.g. groundwater level variation, decrease of water availability for irrigation, variations of nitrate infiltration processes), the modeling approach integrated an ensemble of climate, hydrologic and hydrogeologic models running from the global to the regional scale. Global and regional climate models and downscaling techniques were used to make climate simulations for the reference period 1961-1990 and the projection period 2010-2100. The simulation of the recent climate was performed using observed radiative forcings, whereas the projections were done prescribing the radiative forcings according to the IPCC A1B emission scenario. The climate simulations and the downscaling then provided the precipitation, temperature and evapo-transpiration fields used for the impact analysis. Based on downscaled climate projections, three reference scenarios for the period 2071-2100 (i.e. the driest, the wettest and the mild year) were selected and used to run a regional geomorphoclimatic and hydrogeological model. The final output of the model ensemble produced
Directory of Open Access Journals (Sweden)
Ali Zare
2011-10-01
Full Text Available Survival analysis is a set of methods used for the analysis of time-to-event data. This study aimed to compare the results of the semi-parametric Cox model with those of parametric models in determining the factors influencing the length of stay of patients in the inpatient units of Women Hospital in Tehran, Iran. In this historical cohort study, all 3421 charts of the patients admitted to the Obstetrics, Surgery and Oncology units in 2008 were reviewed, and the required patient data such as medical insurance coverage type, admission month, day and time, inpatient unit, final diagnosis, number of diagnostic tests, and admission type were collected. The patient length of stay leading to recovery was considered as the survival variable. To compare the semi-parametric Cox model with parametric models (including exponential, Weibull, Gompertz, log-normal, log-logistic and gamma) and find the model best fitted to the data, Akaike's Information Criterion (AIC) and Cox-Snell residuals were used. P<0.05 was considered statistically significant. The AIC and the Cox-Snell residual graph showed that the gamma model had the lowest AIC (4288.598) and the graph closest to the bisector. The results of the gamma model showed that the factors affecting the patient length of stay were admission day, inpatient unit, related physician specialty, emergency admission, final diagnosis, and the number of laboratory tests, radiographies and sonographies (P<0.05). The results showed that the gamma model provided a better fit to the studied data than the Cox proportional hazards model. Therefore, researchers in the healthcare field should consider this model in their research on patient length of stay (LOS) when the assumption of proportional hazards is not fulfilled.
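The AIC-based comparison of parametric survival models can be sketched on synthetic, fully observed length-of-stay data; the study's real data additionally involve censoring and covariates, which this sketch omits, and the data below are simulated, not the hospital's.

```python
import numpy as np

# Compare exponential vs. Weibull fits by maximum likelihood and AIC = 2k - 2*logL.
rng = np.random.default_rng(0)
los = rng.weibull(1.8, size=500) * 5.0    # synthetic lengths of stay (days)

# Exponential model: MLE of the rate is 1/mean; closed-form log-likelihood.
lam = 1.0 / los.mean()
ll_exp = len(los) * np.log(lam) - lam * los.sum()
aic_exp = 2 * 1 - 2 * ll_exp

def weibull_loglik(k, x):
    """Weibull log-likelihood with the scale profiled out at its MLE given shape k."""
    scale = (np.mean(x ** k)) ** (1.0 / k)
    return np.sum(np.log(k) + (k - 1) * np.log(x)
                  - k * np.log(scale) - (x / scale) ** k)

# Crude grid search over the shape parameter (a real fit would use an optimizer).
shapes = np.linspace(0.5, 4.0, 351)
ll_wei = max(weibull_loglik(k, los) for k in shapes)
aic_wei = 2 * 2 - 2 * ll_wei

best = "Weibull" if aic_wei < aic_exp else "exponential"
```

Since the exponential is the Weibull with shape 1, the Weibull log-likelihood can never be lower; AIC penalizes the extra parameter, so it picks the richer model only when the fit improves enough.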
Utilization of sulfate additives in biomass combustion: fundamental and modeling aspects
DEFF Research Database (Denmark)
Wu, Hao; Jespersen, Jacob Boll; Grell, Morten Nedergaard;
2013-01-01
Sulfates, such as ammonium sulfate, aluminum sulfate and ferric sulfate, are effective additives for converting the alkali chlorides released from biomass combustion to the less harmful alkali sulfates. Optimization of the use of these additives requires knowledge of their decomposition rate...... and product distribution under high-temperature conditions. In the present work, the decomposition of ammonium sulfate, aluminum sulfate and ferric sulfate was studied in a fast-heating-rate thermogravimetric analyzer to derive a kinetic model describing the process. The yields of SO2 and SO3...... of different sulfates indicated that ammonium sulfate clearly has the strongest sulfation power towards KCl at temperatures below 800 °C, whereas the sulfation power of ferric and aluminum sulfates clearly exceeds that of ammonium sulfate between 900 and 1000 °C. However, feeding gaseous SO3 was found to be most...
Minimax-optimal rates for sparse additive models over kernel classes via convex programming
Raskutti, Garvesh; Yu, Bin
2010-01-01
Sparse additive models are families of $d$-variate functions that have the additive decomposition $f^* = \sum_{j \in S} f^*_j$, where $S$ is an unknown subset of cardinality $s \ll d$. We consider the case where each component function $f^*_j$ lies in a reproducing kernel Hilbert space, and analyze a simple kernel-based convex program for estimating the unknown function $f^*$. Working within a high-dimensional framework that allows both the dimension $d$ and sparsity $s$ to scale, we derive convergence rates in the $L^2(\mathbb{P})$ and $L^2(\mathbb{P}_n)$ norms. These rates consist of two terms: a \emph{subset selection term} of the order $\frac{s \log d}{n}$, corresponding to the difficulty of finding the unknown $s$-sized subset, and an \emph{estimation error} term of the order $s\,\nu_n^2$, where $\nu_n$ denotes the rate for estimating a single univariate function in the kernel class.
Materials Testing and Cost Modeling for Composite Parts Through Additive Manufacturing
2016-04-30
[adedeji.badiru@afit.edu] Recent advances in additive manufacturing (3D printing) have introduced new parameters in reducing cost ... ability to measure energy consumption and production costs. However, since AM processes can simultaneously produce multiple items in parallel ... (Thirteenth Annual Acquisition Research Symposium, Thursday Sessions, Volume II.)
Topsoil organic carbon content of Europe, a new map based on a generalised additive model
de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas
2014-05-01
There is an increasing demand for up-to-date, spatially continuous organic carbon (OC) data for global environmental and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge-based pedo-transfer rules to large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques to the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples), and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied to 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and mountainous areas. The map of the standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average
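The fitting idea behind a generalized additive model can be illustrated with a minimal Gaussian backfitting loop on synthetic data: the response is modeled as a sum of smooth one-dimensional functions of the covariates, each estimated in turn from the partial residuals. The covariates and the crude running-mean smoother below are illustrative stand-ins, not the GAM implementation or the LUCAS covariates used in the study.

```python
import numpy as np

# Synthetic additive data: y = sin(x1) + 0.5*x2^2 + noise.
rng = np.random.default_rng(1)
n = 2000
x1, x2 = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
y = np.sin(x1) + 0.5 * x2 ** 2 + rng.normal(0, 0.1, n)

def smooth(x, r, window=101):
    """Running-mean smoother of residuals r against covariate x."""
    order = np.argsort(x)
    sm = np.convolve(r[order], np.ones(window) / window, mode="same")
    out = np.empty_like(r)
    out[order] = sm
    return out

f1 = np.zeros(n)
f2 = np.zeros(n)
alpha = y.mean()
for _ in range(20):                       # backfitting iterations
    f1 = smooth(x1, y - alpha - f2)
    f1 -= f1.mean()                       # identifiability: center each f_j
    f2 = smooth(x2, y - alpha - f1)
    f2 -= f2.mean()

resid = y - alpha - f1 - f2
r2 = 1 - resid.var() / y.var()            # in-sample fit of the additive model
```

Production GAM software replaces the running mean with penalized spline smoothers and selects the smoothing parameters automatically, but the alternating structure is the same.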
Dinitz, Laura B.
2008-01-01
With costs of natural disasters skyrocketing and populations increasingly settling in areas vulnerable to natural hazards, society is challenged to better allocate its limited risk-reduction resources. In 2000, Congress passed the Disaster Mitigation Act, amending the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Robert T. Stafford Disaster Relief and Emergency Assistance Act, Pub. L. 93-288, 1988; Federal Emergency Management Agency, 2002, 2008b; Disaster Mitigation Act, 2000), mandating that State, local, and tribal communities prepare natural-hazard mitigation plans to qualify for pre-disaster mitigation grants and post-disaster aid. The Federal Emergency Management Agency (FEMA) was assigned to coordinate and implement hazard-mitigation programs, and it published information about specific mitigation-plan requirements and the mechanisms (through the Hazard Mitigation Grant Program-HMGP) for distributing funds (Federal Emergency Management Agency, 2002). FEMA requires that each community develop a mitigation strategy outlining long-term goals to reduce natural-hazard vulnerability, mitigation objectives and specific actions to reduce the impacts of natural hazards, and an implementation plan for those actions. The implementation plan should explain methods for prioritizing, implementing, and administering the actions, along with a 'cost-benefit review' justifying the prioritization. FEMA, along with the National Institute of Building Sciences (NIBS), supported the development of HAZUS ('Hazards U.S.'), a geospatial natural-hazards loss-estimation tool, to help communities quantify potential losses and to aid in the selection and prioritization of mitigation actions. HAZUS was expanded to a multiple-hazard version, HAZUS-MH, that combines population, building, and natural-hazard science and economic data and models to estimate physical damages, replacement costs, and business interruption for specific natural-hazard scenarios. HAZUS
Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M
2009-10-01
Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
AlRamadan, Abdullah S.
2015-10-01
The demand for fuels with high anti-knock quality has historically been rising, and will continue to increase with the development of downsized and turbocharged spark-ignition engines. Butanol isomers, such as 2-butanol and tert-butanol, have high octane ratings (RON of 105 and 107, respectively), and thus mixed butanols (68.8% by volume of 2-butanol and 31.2% by volume of tert-butanol) can be added to the conventional petroleum-derived gasoline fuels to improve octane performance. In the present work, the effect of mixed butanols addition to gasoline surrogates has been investigated in a high-pressure shock tube facility. The ignition delay times of mixed butanols stoichiometric mixtures were measured at 20 and 40 bar over a temperature range of 800-1200 K. Next, 10 vol% and 20 vol% of mixed butanols (MB) were blended with two different toluene/n-heptane/iso-octane (TPRF) fuel blends having octane ratings of RON 90/MON 81.7 and RON 84.6/MON 79.3. These MB/TPRF mixtures were investigated in the shock tube under conditions similar to those mentioned above. A chemical kinetic model was developed to simulate the low- and high-temperature oxidation of mixed butanols and MB/TPRF blends. The proposed model is in good agreement with the experimental data with some deviations at low temperatures. The effect of mixed butanols addition to TPRFs is marginal when examining the ignition delay times at high temperatures. However, when extended to lower temperatures (T < 850 K), the model shows that the mixed butanols addition to TPRFs causes the ignition delay times to increase and hence behaves like an octane booster at engine-like conditions. © 2015 The Combustion Institute.
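Shock-tube ignition delay times such as those above are commonly summarized with an Arrhenius-type correlation, tau = A·exp(Ea/(R·T)), so that ln(tau) is linear in 1/T. A sketch of recovering the parameters by linear regression; the data below are synthetic with hypothetical A and Ea, not this study's measurements.

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol K)
T = np.linspace(800.0, 1200.0, 9)           # temperatures, K
A_true, Ea_true = 1e-9, 1.5e5               # s and J/mol (hypothetical values)
tau = A_true * np.exp(Ea_true / (R * T))    # synthetic ignition delay times, s

# ln(tau) = ln(A) + (Ea/R) * (1/T): fit a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
Ea_fit = slope * R                          # recovered global activation energy
A_fit = np.exp(intercept)                   # recovered pre-exponential factor
```

Real correlations often add pressure and equivalence-ratio power-law factors, and low-temperature chemistry (e.g. NTC behavior) breaks the single-Arrhenius form, which is one reason detailed kinetic models are needed below ~850 K.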
An Empirical Research on the Model of the Right in Additional Allocation of Stocks
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
How to define the value of the Right in Additional Allocation of Stocks (RAAS) plays an important role in stock markets, whether or not the shareholders exercise the right. Moreover, the valuation of the RAAS and the exercise price (K) are mutually cause and effect. Based on the literature on this subject, this paper presents a model valuing the RAAS per share. Using public information from the Shenzhen stock market, we simulate the RAAS value of Shenwuye, which is a shareholding corp...
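As an illustration only: one common textbook simplification treats the per-share value of such a right as a European call on the stock struck at the subscription price K, which can be priced with the Black-Scholes formula. This is a stand-in sketch with hypothetical parameters, not the valuation model proposed in the paper (and it ignores dilution, which a rights-issue model would normally adjust for).

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def right_value(S, K, r, sigma, T):
    """Black-Scholes call value: the right to pay K for a share worth S_T at T."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical inputs: stock at 10, subscription price 8, 3 months to exercise.
v = right_value(S=10.0, K=8.0, r=0.03, sigma=0.4, T=0.25)
```

The mutual dependence noted in the abstract shows up here too: the chosen K directly drives the right's value, while the anticipated value of the right in turn constrains what K the issuer can set.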
Micellar Effects on Nucleophilic Addition Reaction and Applicability of Enzyme Catalysis Model
Directory of Open Access Journals (Sweden)
R. K. London Singh
2012-01-01
Full Text Available This study describes the effect of anionic and cationic micelles on the nucleophilic addition reaction of rosaniline hydrochloride (RH) with hydroxide under pseudo-first-order conditions. A strong inhibitory effect is observed with SDS micelles, whereas CTAB catalyses the reaction. This is explained on the basis of electrostatic and hydrophobic interactions operating simultaneously in the reaction system. The kinetic data obtained are quantitatively analysed by applying the positive cooperativity model of enzyme catalysis. Binding constants and the influence of counterions on the reaction have also been investigated.
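Under pseudo-first-order conditions (hydroxide in large excess), the observed rate constant is typically extracted from an exponential decay of the dye absorbance, A(t) = A0·exp(-k_obs·t), so ln A(t) is linear in t. A sketch on synthetic data; the rate constant and noise level below are hypothetical, not the measured kinetics of the RH/hydroxide system.

```python
import numpy as np

t = np.linspace(0, 300, 31)                     # time, s
k_true = 0.012                                  # hypothetical k_obs, s^-1
rng = np.random.default_rng(2)
# Synthetic absorbance trace with small multiplicative noise.
A = 0.85 * np.exp(-k_true * t) * rng.normal(1.0, 0.005, t.size)

# Linear fit of ln A vs t; the slope is -k_obs.
slope, intercept = np.polyfit(t, np.log(A), 1)
k_obs = -slope
half_life = np.log(2) / k_obs                   # s
```

Repeating this at several surfactant concentrations gives the k_obs-vs-[surfactant] profile to which a cooperativity (Hill-type) binding model can then be fitted.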
Directory of Open Access Journals (Sweden)
Khosravi
2015-09-01
Full Text Available Background Transplantation is the only treatment for patients with liver failure. Since the therapy imposes high expenses on the patients and the community, identifying the factors affecting the survival of such patients after transplantation is valuable. Objectives The current study attempted to model the survival of patients (two years old and above) after liver transplantation using neural network and Cox proportional hazards (Cox PH) regression models. The event is defined as death due to complications of liver transplantation. Patients and Methods In a historical cohort study, the clinical findings of 1168 patients who underwent liver transplant surgery (from March 2008 to March 2013) at Shiraz Namazee Hospital Organ Transplantation Center, Shiraz, Southern Iran, were used. To model the one- to five-year survival of these patients, a Cox PH regression model and a three-layer feed-forward artificial neural network (ANN) were applied to the data separately, and their prediction accuracy was compared using the area under the receiver operating characteristic (ROC) curve. Furthermore, the Kaplan-Meier method was used to estimate the survival probabilities in different years. Results The estimated one- to five-year survival probabilities of the patients were 91%, 89%, 85%, 84%, and 83%, respectively. The areas under the ROC curve were 86.4% and 80.7% for the ANN and Cox PH models, respectively. In addition, the prediction accuracy for both the ANN and Cox PH methods was equally 92.73%. Conclusions The present study found more accurate results for the ANN method compared to the Cox PH model in analyzing the survival of patients with liver transplantation. Furthermore, the order of the factors affecting patients' survival after transplantation was clinically more acceptable. The large dataset with few missing data was an advantage of this study, which makes the results more reliable.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Phase-Field Modeling of Microstructure Evolution in Electron Beam Additive Manufacturing
Gong, Xibing; Chou, Kevin
2015-05-01
In this study, the microstructure evolution in the powder-bed electron beam additive manufacturing (EBAM) process is studied using phase-field modeling. In essence, EBAM involves a rapid solidification process and the properties of a build partly depend on the solidification behavior as well as the microstructure of the build material. Thus, the prediction of microstructure evolution in EBAM is of importance for its process optimization. Phase-field modeling was applied to study the microstructure evolution and solute concentration of the Ti-6Al-4V alloy in the EBAM process. The effect of undercooling was investigated through the simulations; the greater the undercooling, the faster the dendrite grows. The microstructure simulations show multiple columnar-grain growths, comparable with experimental results for the tested range.
Hybrid 2D-3D modelling of GTA welding with filler wire addition
Traidia, Abderrazak
2012-07-01
A hybrid 2D-3D model for the numerical simulation of Gas Tungsten Arc welding is proposed in this paper. It offers the possibility to predict the temperature field as well as the shape of the solidified weld joint for different operating parameters, with relatively good accuracy and reasonable computational cost. Also, an original approach to simulate the effect of immersing a cold filler wire in the weld pool is presented. The simulation results reveal two important observations. First, the weld pool depth is locally decreased in the presence of filler metal, which is due to the energy absorption by the cold feeding wire from the hot molten pool. In addition, the weld shape, maximum temperature and thermal cycles in the workpiece are relatively well predicted even when a 2D model for the arc plasma region is used. © 2012 Elsevier Ltd. All rights reserved.
Wheeler, Russell L.
2016-01-01
Probabilistic seismic‐hazard assessment (PSHA) requires an estimate of Mmax, the moment magnitude M of the largest earthquake that could occur within a specified area. Sparse seismicity hinders Mmax estimation in the central and eastern United States (CEUS) and tectonically similar regions worldwide (stable continental regions [SCRs]). A new global catalog of moderate‐to‐large SCR earthquakes is analyzed with minimal assumptions about enigmatic geologic controls on SCR Mmax. An earlier observation that SCR earthquakes of M 7.0 and larger occur in young (250–23 Ma) passive continental margins and associated rifts but not in cratons is not strongly supported by the new catalog. SCR earthquakes of M 7.5 and larger are slightly more numerous and reach slightly higher M in young passive margins and rifts than in cratons. However, overall histograms of M from young margins and rifts and from cratons are statistically indistinguishable. This conclusion is robust under uncertainties in M, the locations of SCR boundaries, and which of two available global SCR catalogs is used. The conclusion stems largely from recent findings that (1) large southeast Asian earthquakes once thought to be SCR were in actively deforming crust and (2) long escarpments in cratonic Australia were formed by prehistoric faulting. The 2014 seismic‐hazard model of the U.S. Geological Survey represents CEUS Mmax as four‐point probability distributions. The distributions have weighted averages of M 7.0 in cratons and M 7.4 in passive margins and rifts. These weighted averages are consistent with Mmax estimates of other SCR PSHAs of the CEUS, southeastern Canada, Australia, and India.
Voronov, Nikolai; Dikinis, Alexandr
2015-04-01
Modern remote sensing (RS) technologies open wide opportunities for monitoring hazardous hydrometeorological phenomena and for increasing the accuracy and lead time of their forecasts. RS data do not supersede ground-based observations, but they make it possible to solve new problems in hydrological and meteorological monitoring and forecasting. In particular, satellite, aviation, or radar observations can increase the spatio-temporal resolution of hydrometeorological observations. Moreover, the combined use of remote sensing data, ground-based observations, and the output of hydrodynamical weather models appears very promising, as it can significantly increase forecast accuracy and lead time. Modern technologies for monitoring and forecasting hazardous hydrometeorological phenomena based on the combined use of satellite, aviation, and ground-based observations, together with the output of hydrodynamical weather models, are considered. It is noted that an important and promising monitoring method is bioindication: observing the response of biota to external influences and the behavior of animals able to sense impending natural disturbances. Implementing the described approaches can significantly reduce both the damage caused by particular hazardous hydrological and meteorological phenomena and the general level of hydrometeorological vulnerability of various facilities and the economy of the Russian Federation as a whole.
Directory of Open Access Journals (Sweden)
J. Bruce H. Shyu
2016-09-01
Full Text Available Taiwan is located at an active plate boundary and is prone to earthquake hazards. To evaluate the island's seismic risk, the Taiwan Earthquake Model (TEM) project, supported by the Ministry of Sciences and Technology, evaluates earthquake hazard, risk, and related social and economic impact models for Taiwan through multidisciplinary collaboration. One of the major tasks of TEM is to construct a complete and updated seismogenic structure database for Taiwan to assess future seismic hazards. Toward this end, we have combined information from pre-existing databases with data obtained from new analyses to build an updated, digitized three-dimensional seismogenic structure map for Taiwan. Thirty-eight on-land active seismogenic structures are identified. For detailed information on individual structures, such as their long-term slip rates and potential recurrence intervals, we collected data from existing publications and calculated values from the results of our own field surveys and investigations. We hope this updated database will become an important constraint for seismic hazard assessment calculations in Taiwan and will provide important information for engineers and hazard mitigation agencies.
Guarana provides additional stimulation over caffeine alone in the planarian model.
Directory of Open Access Journals (Sweden)
Dimitrios Moustakas
Full Text Available The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians following substance exposure have been shown to provide an excellent system to measure the effects of drug stimulation, addiction, and withdrawal. To gauge the stimulant effects of guarana, we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose.
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.
Fan, Jianqing; Feng, Yang; Song, Rui
2011-06-01
A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. It is shown that, under general nonparametric models and some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, data-driven thresholding and iterative nonparametric independence screening (INIS) are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
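The core idea of independence screening can be sketched in a few lines: rank each predictor by a marginal association measure and keep the strongest ones. In the sketch below, plain marginal correlation (as in the original SIS of Fan and Lv) stands in for the nonparametric marginal fit that NIS would use; that substitution is an assumption made to keep the example dependency-free:

```python
# Sure independence screening sketch: rank predictors by marginal association
# with the response and keep the top d. NIS would replace the correlation
# here with a nonparametric (e.g. spline) marginal regression fit.
import math
import random

def marginal_corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen(X, y, d):
    """X: list of predictor columns; return indices of the d strongest."""
    scores = [abs(marginal_corr(col, y)) for col in X]
    return sorted(range(len(X)), key=lambda j: -scores[j])[:d]

random.seed(0)
n, p = 200, 50
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]
# Sparse truth: y depends only on predictors 0 and 3.
y = [2 * X[0][i] - 3 * X[3][i] + random.gauss(0, 0.5) for i in range(n)]
kept = screen(X, y, d=5)
print(kept)  # the two active predictors should survive the screen
```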
Quantifying spatial disparities in neonatal mortality using a structured additive regression model.
Directory of Open Access Journals (Sweden)
Lawrence N Kazembe
Full Text Available BACKGROUND: Neonatal mortality contributes a large proportion towards early childhood mortality in developing countries, with considerable geographical variation at small areas within countries. METHODS: A geo-additive logistic regression model is proposed for quantifying small-scale geographical variation in neonatal mortality and estimating risk factors of neonatal mortality. Random effects are introduced to capture spatial correlation and heterogeneity. The spatial correlation can be modelled using Markov random fields (MRF) when data are aggregated, while two-dimensional P-splines apply when exact locations are available; the unstructured spatial effects are assigned an independent Gaussian prior. Socio-economic and bio-demographic factors which may affect the risk of neonatal mortality are simultaneously estimated as fixed effects and as nonlinear effects for continuous covariates. The smooth effects of continuous covariates are modelled by second-order random walk priors. Modelling and inference use the empirical Bayesian approach via the penalized likelihood technique. The methodology is applied to analyse the likelihood of neonatal deaths, using data from the 2000 Malawi demographic and health survey. The spatial effects are quantified through MRF and two-dimensional P-spline priors. RESULTS: Findings indicate that both fixed and spatial effects are associated with neonatal mortality. CONCLUSIONS: Our study, therefore, suggests that the challenge to reduce neonatal mortality goes beyond addressing individual factors and also requires understanding unmeasured covariates for potentially effective interventions.
Combining neuroprotectants in a model of retinal degeneration: no additive benefit.
Directory of Open Access Journals (Sweden)
Fabiana Di Marco
Full Text Available The central nervous system undergoing degeneration can be stabilized, and in some models can be restored to function, by neuroprotective treatments. Photobiomodulation (PBM and dietary saffron are distinctive as neuroprotectants in that they upregulate protective mechanisms, without causing measurable tissue damage. This study reports a first attempt to combine the actions of PBM and saffron. Our working hypothesis was that the actions of PBM and saffron in protecting retinal photoreceptors, in a rat light damage model, would be additive. Results confirmed the neuroprotective potential of each used separately, but gave no evidence that their effects are additive. Detailed analysis suggests that there is actually a negative interaction between PBM and saffron when given simultaneously, with a consequent reduction of the neuroprotection. Specific testing will be required to understand the mechanisms involved and to establish whether there is clinical potential in combining neuroprotectants, to improve the quality of life of people affected by retinal pathology, such as age-related macular degeneration, the major cause of blindness and visual impairment in older adults.
Chan, H S
2000-09-01
A well-established experimental criterion for two-state thermodynamic cooperativity in protein folding is that the van't Hoff enthalpy DeltaH(vH) around the transition midpoint is equal, or very nearly so, to the calorimetric enthalpy DeltaH(cal) of the entire transition. This condition is satisfied by many small proteins. We use simple lattice models to provide a statistical mechanical framework to elucidate how this calorimetric two-state picture may be reconciled with the hierarchical multistate scenario emerging from recent hydrogen exchange experiments. We investigate the feasibility of using inverse Laplace transforms to recover the underlying density of states (i.e., enthalpy distribution) from calorimetric data. We find that the constraint imposed by DeltaH(vH)/DeltaH(cal) approximately 1 on densities of states of proteins is often more stringent than other "two-state" criteria proposed in recent theoretical studies. In conjunction with reasonable assumptions, the calorimetric two-state condition implies a narrow distribution of denatured-state enthalpies relative to the overall enthalpy difference between the native and the denatured conformations. This requirement does not always correlate with simple definitions of "sharpness" of a transition and has important ramifications for theoretical modeling. We find that protein models that assume capillarity cooperativity can exhibit overall calorimetric two-state-like behaviors. However, common heteropolymer models based on additive hydrophobic-like interactions, including highly specific two-dimensional Gō models, fail to produce proteinlike DeltaH(vH)/DeltaH(cal) approximately 1. A simple model is constructed to illustrate a proposed scenario in which physically plausible local and nonlocal cooperative terms, which mimic helical cooperativity and environment-dependent hydrogen bonding strength, can lead to thermodynamic behaviors closer to experiment. Our results suggest that proteinlike thermodynamic
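The two-state criterion DeltaH(vH)/DeltaH(cal) ≈ 1 can be made concrete with a toy density of states. The sketch below builds a strict two-level system (energies and degeneracies are arbitrary illustrative choices, not a lattice-protein model, and k_B is set to 1), computes the heat capacity at the midpoint from the enthalpy variance, and recovers the van't Hoff enthalpy from the standard two-state relation C_max = ΔH²/(4 k_B T²):

```python
# Calorimetric two-state criterion sketch on a toy two-level density of states.
import math

def heat_capacity(T, levels):
    """levels: list of (enthalpy H, degeneracy g). Returns <H> and Cv (k_B = 1)."""
    weights = [g * math.exp(-H / T) for H, g in levels]
    Z = sum(weights)
    avg_H = sum(w * H for w, (H, g) in zip(weights, levels)) / Z
    avg_H2 = sum(w * H * H for w, (H, g) in zip(weights, levels)) / Z
    return avg_H, (avg_H2 - avg_H ** 2) / T ** 2  # Cv = Var(H) / (k_B T^2)

# Strict two-state system: native (H=0, g=1) vs denatured (H=10, g=e^10).
levels = [(0.0, 1.0), (10.0, math.exp(10.0))]
Tm = 1.0  # transition midpoint: both states equally populated here
_, cv = heat_capacity(Tm, levels)
dH_cal = 10.0                     # full enthalpy gap between the two states
dH_vH = 2.0 * Tm * math.sqrt(cv)  # van't Hoff enthalpy from the Cv peak
print(f"DeltaH_vH / DeltaH_cal = {dH_vH / dH_cal:.3f}")
```

A model with a broad distribution of denatured-state enthalpies would spread the Cv peak and push this ratio below 1, which is the failure mode the abstract attributes to additive hydrophobic-like models.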
Institute of Scientific and Technical Information of China (English)
郑蓉建; 周林成; 潘丰
2012-01-01
Fault monitoring of a bioprocess is important to ensure reactor safety and maintain high product quality. It is difficult to build an accurate mechanistic model of a bioprocess, so fault monitoring based on rich historical or online databases is an effective alternative. A group of data can be resampled stochastically with the bootstrap method, improving the generalization capability of the model. In this paper, online fault monitoring with generalized additive models (GAMs) combined with the bootstrap is proposed for the glutamate fermentation process. GAMs and the bootstrap are first used to determine confidence intervals based on online and off-line normal sampled data from glutamate fermentation experiments. GAMs are then used for online fault monitoring of time, dissolved oxygen, oxygen uptake rate, and carbon dioxide evolution rate. The method can provide accurate fault alarms online and helps provide useful information for removing faults and abnormal phenomena in the fermentation.
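The monitoring logic above amounts to: build a confidence band from normal-operation data by bootstrap resampling, then flag online readings that fall outside it. A minimal sketch with invented dissolved-oxygen values (a simple percentile interval on the mean stands in for the GAM-based band of the paper):

```python
# Bootstrap confidence interval sketch for fault monitoring: build a percentile
# interval from normal-operation data, then flag online readings outside it.
# Readings are illustrative; real thresholds would come from GAM fits.
import random

def bootstrap_interval(data, n_boot=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(data) for _ in data) / len(data)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

normal_do = [5.1, 5.3, 4.9, 5.0, 5.2, 5.1, 4.8, 5.0, 5.2, 4.9]  # mg/L, normal runs
lo, hi = bootstrap_interval(normal_do)

def is_fault(reading):
    """Online check: alarm when a reading leaves the normal band."""
    return not (lo <= reading <= hi)

print(f"normal band: [{lo:.2f}, {hi:.2f}]")
print(is_fault(6.0))  # far above the normal band, so this should alarm
```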
Directory of Open Access Journals (Sweden)
IRNANDA AIKO FIFI DJUUNA
2010-07-01
Full Text Available Djuuna IAF, Abbott LK, Van Niel K (2010) Predicting infectivity of Arbuscular Mycorrhizal fungi from soil variables using Generalized Additive Models and Generalized Linear Models. Biodiversitas 11: 145-150. The objective of this study was to predict the infectivity of arbuscular mycorrhizal (AM) fungi from field soil based on soil properties and land use history using generalized additive models (GAMs) and generalized linear models (GLMs). A total of 291 soil samples from a farm in Western Australia near Wickepin were collected and used in this study. Nine soil properties, including elevation, pH, EC, total C, total N, P, K, microbial biomass carbon, and soil texture, together with the land use history of the farm, were used as independent variables, while the percentage of root length colonized (%RLC) was used as the dependent variable. GAMs parameterized for the percent of root length colonized suggested skewed quadratic responses to soil pH and microbial biomass carbon; cubic responses to elevation and soil K; and linear responses to soil P, EC, and total C. The strength of the relationship between percent root length colonized by AM fungi and environmental variables showed that only elevation, total C, and microbial biomass carbon had strong relationships. In general, the GAM and GLM models confirmed the strong relationship between the infectivity of AM fungi (assessed in a glasshouse bioassay for soil collected in summer prior to the first rain of the season) and soil properties.
Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment
Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc
2013-04-01
Structural and mechanical analyses of rock mass are key components of rock slope stability assessment. The complementary use of photogrammetric techniques [Poropat, 2001] and coupled DFN-DEM models [Harthong et al., 2012] provides a methodology that can be applied to complex 3D configurations. The DFN-DEM formulation [Scholtès & Donzé, 2012a,b] has been chosen for modeling since it can explicitly take the fracture sets into account. Analyses conducted in 3D can produce very complex and unintuitive failure mechanisms. Therefore, a modeling strategy must be established in order to identify the key features which control the stability. For this purpose, a realistic case is presented to show the overall methodology from photogrammetry acquisition to mechanical modeling. By combining Sirovision and YADE Open DEM [Kozicki & Donzé, 2008, 2009], it can be shown that even for large camera-to-rock-slope ranges (tested at about one kilometer), the accuracy of the data is sufficient to assess the role of the structures in the stability of a jointed rock slope. In this case, on-site stereo pairs of 2D images were taken to create 3D surface models. Then, digital identification of structural features in the unstable block zone was processed with the Sirojoint software [Sirovision, 2010]. After acquiring the numerical topography, the 3D digitalized and meshed surface was imported into the YADE Open DEM platform to define the studied rock mass as a closed (manifold) bounding volume for numerical modeling. The discontinuities were then imported as meshed planar elliptic surfaces into the model. The model was then submitted to gravity loading. During this step, high values of cohesion were assigned to the discontinuities in order to avoid failure or block displacements triggered by inertial effects. To assess the respective role of the pre-existing discontinuities in the block stability, different configurations have been tested as well as different degree of
Optimal Design in and Hazards of Linearization of Langmuir's Nonlinear Model.
Harrison, Ferrin; Katti, S. K.
Langmuir's model is studied for the situation where epsilon is independently and identically normally distributed. The "Y/x" versus "Y" plot had a 90% mid-range that did not contain the true curve in a vast portion of the range of "x". The "1/Y" versus "1/x" plot had undefined expected values,…
2010-05-26
... of Nuclear Reactor Regulation, U.S. Nuclear Regulatory Commission, Washington, DC, 20555-0001... Processes Branch, Division of Policy and Rulemaking, Office of Nuclear Reactor Regulation. Revised Model... with the confidence in the ability of the fission product barriers (i.e., fuel cladding,...
Terpstra, Teun; Lindell, Michael K.
2013-01-01
Although research indicates that adoption of flood preparations among Europeans is low, only a few studies have attempted to explain citizens' preparedness behavior. This article applies the Protective Action Decision Model (PADM) to explain flood preparedness intentions in the Netherlands. Survey data ("N" = 1,115) showed that…
Models for recurrent gas release event behavior in hazardous waste tanks
Energy Technology Data Exchange (ETDEWEB)
Anderson, D.N. [Pacific Northwest Lab., Richland, WA (United States); Arnold, B.C. [California Univ., Riverside, CA (United States). Dept. of Statistics
1994-08-01
Certain radioactive waste storage tanks at the United States Department of Energy Hanford facilities continuously generate gases as a result of radiolysis and chemical reactions. The congealed sludge in these tanks traps the gases and causes the level of the waste within the tanks to rise. The waste level continues to rise until the sludge becomes buoyant and "rolls over", changing places with heavier fluid on top. During a rollover, the trapped gases are released, resulting in a sudden drop in the waste level. This is known as a gas release event (GRE). After a GRE, the waste re-congeals and gas again accumulates, leading to another GRE. We present nonlinear time series models that produce simulated sample paths closely resembling the temporal history of waste levels in these tanks. The models also imitate the random GRE behavior observed in the temporal waste level history of a storage tank. We are interested in using the structure of these models to understand the probabilistic behavior of the random variable "time between consecutive GREs". Understanding the stochastic nature of this random variable is important because the hydrogen and nitrous oxide gases released during a GRE are flammable, and the ammonia that is released is a health risk. From a safety perspective, activity around such waste tanks should be halted when a GRE is imminent. With credible GRE models, we can establish time windows in which waste tank research and maintenance activities can be safely performed.
DEFF Research Database (Denmark)
Andersen, J.S.; Bedaux, J.J.M.; Kooijman, S.A.L.M.;
2000-01-01
This paper describes the influence of design characteristics on the statistical inference for an ecotoxicological hazard-based model using simulated survival data. The design characteristics of interest are the number and spacing of observations (counts) in time, the number and spacing of exposure...
Nonlinear feedback in a six-dimensional Lorenz Model: impact of an additional heating term
Directory of Open Access Journals (Sweden)
B.-W. Shen
2015-03-01
Full Text Available In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional Lorenz model (5DLM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode, added to improve the representation of the streamfunction, is referred to as a secondary streamfunction mode, while the two additional modes that appear in both the 6DLM and 5DLM but not in the original LM are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of the three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. This critical value is larger than that of the 3DLM (rc ~ 24.74) but slightly smaller than that of the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback associated with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in improving the solution stability of the 6DLM; (2) the additional heating term associated with the secondary streamfunction mode may destabilize the solution; and (3) the overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): if the flap of a butterfly's wings can be instrumental in generating a tornado, it
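For readers unfamiliar with the baseline, the classic three-mode Lorenz system (3DLM) that the 5DLM and 6DLM extend can be integrated in a few lines. The extra streamfunction/temperature modes of the higher-dimensional models are out of scope here; this sketch only shows the base system whose chaos threshold (rc ~ 24.74) the abstract compares against:

```python
# RK4 integration of the three-mode Lorenz system (3DLM):
#   dx/dt = sigma*(y - x),  dy/dt = x*(r - z) - y,  dz/dt = x*y - b*z
def lorenz_rk4(state, r, dt=0.01, sigma=10.0, b=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (r - z) - y, x * y - b * z)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * p + 2 * q + d)
                 for s, a, p, q, d in zip(state, k1, k2, k3, k4))

# Below the critical Rayleigh number the trajectory settles to a fixed point;
# above it (e.g. r = 28) it wanders chaotically on the bounded attractor.
state = (1.0, 1.0, 1.0)
for _ in range(5000):
    state = lorenz_rk4(state, r=28.0)
print(state)
```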
Lin, Lei; Wang, Qian; Sadek, Adel W
2016-06-01
The duration of freeway traffic accidents is an important factor, which affects traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with M5P for accident duration prediction, however, is that whereas linear regression assumes that the conditional distribution of accident durations is normally distributed, the distribution for a "time-to-an-event" is almost certainly nonsymmetrical. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling scenario, and given this, HBDMs have been previously applied to analyze and predict traffic accident duration. Previous research, however, has not yet applied HBDMs for accident duration prediction in association with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach for accident duration prediction, which improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree model are HBDMs instead of linear regression models. Such a model offers the advantage of minimizing data heterogeneity through dataset classification, and avoids the need for the incorrect assumption of normality for traffic accident durations. The proposed model was then tested on two freeway accident datasets. For each dataset, the first 500 records were used to train the following three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM, and the remainder of the data were used for testing. The results show that the proposed M5P-HBDM managed to identify more significant and meaningful variables than either M5P or HBDMs. Moreover, the M5P-HBDM had the lowest overall mean
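A hazard-based duration model in its simplest form is sketched below: an exponential (constant-hazard) duration model fitted by maximum likelihood. The paper's tree leaves would use richer parametric hazards, and the durations here are invented, not the freeway dataset:

```python
# Minimal hazard-based duration model: constant hazard lambda, so the survival
# function is S(t) = exp(-lambda * t) and the MLE is lambda_hat = n / sum(t).
import math

def fit_exponential(durations):
    """MLE for a constant hazard rate from fully observed durations."""
    return len(durations) / sum(durations)

def survival(t, lam):
    """P(duration > t) under constant hazard lam."""
    return math.exp(-lam * t)

accident_minutes = [12, 25, 31, 18, 44, 60, 22, 35, 28, 50]  # illustrative
lam = fit_exponential(accident_minutes)
print(f"hazard rate: {lam:.4f} per minute")
print(f"P(duration > 30 min) = {survival(30, lam):.3f}")
```

The right-skew of time-to-event data is exactly why this family fits durations better than a normal-error linear regression.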
Directory of Open Access Journals (Sweden)
Anand Joshi
2013-01-01
Full Text Available This paper presents the use of a semiempirical method for seismic hazard zonation. The seismotectonically important region of the Uttarakhand Himalaya has been considered in this work. Ruptures along the lineaments in the area, identified from the tectonic map, are modeled deterministically using the semiempirical approach given by Midorikawa (1993). This approach makes use of an attenuation relation of peak ground acceleration for simulating strong ground motion at any site. Strong motion data collected over a span of three years in this region have been used to develop an attenuation relation of peak ground acceleration of limited magnitude and distance applicability. The developed attenuation relation is used in the semiempirical method to predict peak ground acceleration from the modeled rupture planes in the area. A set of peak ground acceleration values from possible ruptures in the area at the point of investigation is further used to compute the probability of exceedance of peak ground acceleration values of 100 and 200 gal. The prepared map shows that regions such as Tehri, Chamoli, Almora, Srinagar, Devprayag, Bageshwar, and Pauri fall in a zone of 10% probability of exceedance of a peak ground acceleration of 200 gal.
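The final step described above, turning a set of simulated peak ground accelerations into a probability of exceedance, is just an empirical frequency. A minimal sketch with invented PGA values (not outputs of the Uttarakhand attenuation relation):

```python
# Probability-of-exceedance sketch: given PGA values at one site simulated
# from many modeled rupture scenarios, estimate P(PGA > threshold) empirically.
def prob_exceedance(pga_values, threshold):
    return sum(1 for g in pga_values if g > threshold) / len(pga_values)

# Illustrative PGA (in gal) at one site from a set of modeled rupture planes.
pga = [45, 80, 120, 150, 95, 210, 60, 130, 175, 250, 90, 110]
for level in (100, 200):
    print(f"P(PGA > {level} gal) = {prob_exceedance(pga, level):.2f}")
```

Mapping these per-site probabilities over a grid of investigation points yields the hazard zonation map.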
Uniting Mandelbrot’s Noah and Joseph Effects in Toy Models of Natural Hazard Time Series
Credgington, D.; Watkins, N. W.; Chapman, S. C.; Rosenberg, S. J.; Sanchez, R.
2009-12-01
The forecasting of extreme events is a highly topical, cross-disciplinary problem. One aspect which is potentially tractable even when the events themselves are stochastic is the probability of a "burst" of a given size and duration, defined as the area between a time series and a constant threshold. Many natural time series depart from the simplest, Brownian, case, and in the 1960s Mandelbrot developed the use of fractals to describe these departures. In particular, he proposed two kinds of fractal model to capture the way in which natural data is often persistent in time (his "Joseph effect", common in hydrology and exemplified by fractional Brownian motion) and/or prone to heavy-tailed jumps (the "Noah effect", typical of economic index time series, for which he gave Levy flights as an exemplar). Much of the earlier modelling, however, has emphasised one of the Noah and Joseph parameters (the tail exponent mu, or one derived from the temporal behaviour such as the power spectral exponent beta) at the other's expense. I will describe work [1] in which we applied a simple self-affine stable model, linear fractional stable motion (LFSM), which unifies both effects, to better describe natural data, in this case from space physics. I will show how we have resolved some contradictions seen in earlier work, where purely Joseph or Noah descriptions had been sought. I will also show recent work [2] using numerical simulations of LFSM and simple analytic scaling arguments to study the problem of the area between a fractional Levy model time series and a threshold. [1] Watkins et al., Space Science Reviews [2005]. [2] Watkins et al., Physical Review E [2009].
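The "burst" quantity defined above, the area between a series and a constant threshold over each contiguous excursion, is straightforward to compute. A minimal sketch on an invented series:

```python
# Burst extraction sketch: for each contiguous excursion above the threshold,
# return its duration (samples) and area (sum of exceedances).
def bursts(series, threshold):
    out, dur, area = [], 0, 0.0
    for x in series:
        if x > threshold:
            dur += 1
            area += x - threshold
        elif dur:
            out.append((dur, area))  # excursion just ended
            dur, area = 0, 0.0
    if dur:
        out.append((dur, area))  # excursion still open at end of record
    return out

series = [0.1, 0.5, 1.4, 2.0, 1.2, 0.3, 0.9, 1.6, 1.1, 0.2]
print(bursts(series, threshold=1.0))
```

Applied to fBm, Levy-flight, or LFSM sample paths, the distribution of these (duration, area) pairs is what distinguishes the Joseph and Noah regimes.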
Ghazavi, Reza; Moafi Rabori, Ali; Ahadnejad Reveshty, Mohsen
2016-01-01
Estimating design storms from rainfall intensity-duration-frequency (IDF) curves is an important task in the hydrologic planning of urban areas. The main aim of this study was to estimate rainfall intensities of the Zanjan city watershed based on the overall relationship of rainfall IDF curves and an appropriate model of hourly rainfall estimation (the Sherman method and the Ghahreman and Abkhezr method). The hydrologic and hydraulic impacts of changes in rainfall IDF curves on flood properties were evaluated via Stormw...
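A Sherman-type IDF relation expresses intensity as a power-law decay in duration, i = a / (t + b)^c, for a given return period. The sketch below uses illustrative placeholder coefficients; in the study they would be fitted to Zanjan rainfall records:

```python
# Sherman-type IDF curve sketch: intensity i (mm/h) vs duration t (minutes).
# Coefficients a, b, c are illustrative placeholders, not fitted values.
def sherman_intensity(t_minutes, a=1200.0, b=10.0, c=0.8):
    return a / (t_minutes + b) ** c

for t in (10, 30, 60, 120):
    print(f"t = {t:3d} min -> i = {sherman_intensity(t):6.1f} mm/h")
```

The characteristic IDF shape, short storms are intense and long storms are mild, follows directly from the negative exponent.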
DEFF Research Database (Denmark)
Hald, Tine
, preferably before accepting the mandate, a scoping exercise is recommended. The scoping exercise could include an assessment of the mandate, possible interpretations of the terms of reference, deadlines, the modelling approaches possible and the data requirements. To support this process, a model catalogue...
Harinath, Eranda; Mann, George K I
2008-06-01
This paper describes a design and two-level tuning method for fuzzy proportional-integral-derivative (FPID) controllers for a multivariable process, where the fuzzy inference uses the standard additive model. The proposed method can be used for any n x n multi-input-multi-output process and guarantees closed-loop stability. In the two-level tuning scheme, the tuning follows two steps: low-level tuning followed by high-level tuning. The low-level tuning adjusts apparent linear gains, whereas the high-level tuning changes the nonlinearity in the normalized fuzzy output. In this paper, two types of FPID configurations are considered, and their performances are evaluated using a real-time multizone temperature control problem with a 3 x 3 process system.
Directory of Open Access Journals (Sweden)
Wararit PANICHKITKOSOLKUL
2012-09-01
Full Text Available Guttman and Tiao [1] and Chang [2] showed that outliers may cause serious bias in estimating autocorrelations, partial correlations, and autoregressive moving average parameters (cited in Chang et al. [3]). This paper presents a modified weighted symmetric estimator for a Gaussian first-order autoregressive AR(1) model with additive outliers. We apply recursive median adjustment based on an exponentially weighted moving average (EWMA) to the weighted symmetric estimator of Park and Fuller [4]. We consider the following estimators: the weighted symmetric estimator, the recursive mean adjusted weighted symmetric estimator proposed by Niwitpong [5], the recursive median adjusted weighted symmetric estimator proposed by Panichkitkosolkul [6], and the weighted symmetric estimator using an adjusted recursive median based on the EWMA. Using Monte Carlo simulations, we compare the mean square error (MSE) of the estimators. Simulation results show that the proposed estimator provides a lower MSE than the other estimators in almost all situations.
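The general shape of such a Monte Carlo comparison can be sketched as follows. This is not the paper's weighted symmetric estimator, only an illustration of the setup: additive outliers contaminate the observation but not the underlying AR(1) process, and the MSE of a naive lag-1 estimator is estimated over replications:

```python
import random
import statistics

def simulate_ar1(n, phi, outlier_prob=0.05, outlier_scale=10.0):
    """AR(1) process y_t = phi*y_{t-1} + e_t, observed with additive outliers.

    Additive outliers perturb the observation only; they do not feed back
    into the recursion (this is what distinguishes them from innovation
    outliers).
    """
    y, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + random.gauss(0.0, 1.0)
        obs = prev + (outlier_scale if random.random() < outlier_prob else 0.0)
        y.append(obs)
    return y

def lag1_estimate(y):
    """Naive least-squares estimate of phi: sum y_t*y_{t-1} / sum y_{t-1}^2."""
    num = sum(a * b for a, b in zip(y[1:], y[:-1]))
    den = sum(b * b for b in y[:-1])
    return num / den

random.seed(42)
phi = 0.6
estimates = [lag1_estimate(simulate_ar1(200, phi)) for _ in range(500)]
mse = statistics.mean((e - phi) ** 2 for e in estimates)
print(f"MSE of naive estimator under additive outliers: {mse:.4f}")
```

Robust variants (recursive mean/median adjustment, EWMA weighting) would be swapped in for `lag1_estimate` and compared on the same simulated replications.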
Energy Technology Data Exchange (ETDEWEB)
Zucca, J J; Walter, W R; Rodgers, A J; Richards, P; Pasyanos, M E; Myers, S C; Lay, T; Harris, D; Antoun, T
2008-11-19
The last ten years have brought rapid growth in the development and use of three-dimensional (3D) seismic models of Earth structure at crustal, regional and global scales. In order to explore the potential for 3D seismic models to contribute to important societal applications, Lawrence Livermore National Laboratory (LLNL) hosted a 'Workshop on Multi-Resolution 3D Earth Models to Predict Key Observables in Seismic Monitoring and Related Fields' on June 6 and 7, 2007 in Berkeley, California. The workshop brought together academic, government and industry leaders in the research programs developing 3D seismic models and methods for the nuclear explosion monitoring and seismic ground motion hazard communities. The workshop was designed to assess the current state of work in 3D seismology and to discuss a path forward for determining if and how 3D Earth models and techniques can be used to achieve measurable increases in our capabilities for monitoring underground nuclear explosions and characterizing seismic ground motion hazards. This paper highlights some of the presentations, issues, and discussions at the workshop and proposes two specific paths by which to begin quantifying the potential contribution of progressively refined 3D seismic models in critical applied arenas. Seismic monitoring agencies are tasked with detection, location, and characterization of seismic activity in near real time. In the case of nuclear explosion monitoring or seismic hazard, decisions to further investigate a suspect event or to launch disaster relief efforts may rely heavily on real-time analysis and results. Because these are weighty decisions, monitoring agencies are regularly called upon to meticulously document and justify every aspect of their monitoring system. In order to meet this level of scrutiny and maintain operational robustness requirements, only mature technologies are considered for operational monitoring systems, and operational technology necessarily lags
Martínez-Rincón, Raúl O.; Rivera-Pérez, Crisalejandra; Diambra, Luis; Noriega, Fernando G.
2017-01-01
Juvenile hormone (JH) regulates development and reproductive maturation in insects. The corpora allata (CA) from female adult mosquitoes synthesize fluctuating levels of JH, which have been linked to ovarian development and are influenced by nutritional signals. The rate of JH biosynthesis is controlled by the rate of flux of isoprenoids in the pathway, which is the outcome of a complex interplay of changes in precursor pools and enzyme levels. A comprehensive study of the changes in enzymatic activities and precursor pool sizes has previously been reported for the mosquito Aedes aegypti JH biosynthesis pathway. In the present studies, we used two different quantitative approaches to describe and predict how changes in the individual metabolic reactions in the pathway affect JH synthesis. First, we constructed generalized additive models (GAMs) that described the association between changes in specific metabolite concentrations and changes in enzymatic activities and substrate concentrations. Changes in substrate concentrations explained 50% or more of the model deviances in 7 of the 13 metabolic steps analyzed. Adding information on enzymatic activities almost always improved the fit of GAMs built solely on substrate concentrations. GAMs were validated using experimental data that were not included when the model was built. In addition, a system of ordinary differential equations (ODEs) was developed to describe the instantaneous changes in metabolites as a function of the levels of enzymatic catalytic activities. The results demonstrated the ability of the models to predict changes in the flux of metabolites in the JH pathway, and they can be used in the future to design and validate experimental manipulations of JH synthesis. PMID:28158248
Energy Technology Data Exchange (ETDEWEB)
Fischer, L.; Deppert, W.R. [Department of Obstetrics and Gynecology, University Hospital Freiburg (Germany); Pfeifer, D. [Department of Hematology and Oncology, University Hospital Freiburg (Germany); Stanzel, S.; Weimer, M. [Department of Biostatistics, German Cancer Research Center, Heidelberg (Germany); Hanjalic-Beck, A.; Stein, A.; Straßer, M.; Zahradnik, H.P. [Department of Obstetrics and Gynecology, University Hospital Freiburg (Germany); Schaefer, W.R., E-mail: wolfgang.schaefer@uniklinik-freiburg.de [Department of Obstetrics and Gynecology, University Hospital Freiburg (Germany)
2012-05-01
Embryo implantation is a crucial step in human reproduction and depends on the timely development of a receptive endometrium. The human endometrium is unique among adult tissues due to its dynamic alterations during each menstrual cycle. It hosts the implantation process, which is governed by progesterone, whereas 17β-estradiol regulates the preceding proliferation of the endometrium. The receptors for both steroids are targets for drugs and endocrine disrupting chemicals. Chemicals with unwanted antigestagenic actions are potentially hazardous to embryo implantation, since many pharmaceutical antiprogestins adversely affect endometrial receptivity. This risk can be addressed by human tissue-specific in vitro assays. As a working basis we compiled data on chemicals interacting with the PR. In our experimental work, we developed a flexible in vitro model based on human endometrial Ishikawa cells. Effects of antiprogestin compounds on pre-selected target genes were characterized by sigmoidal concentration–response curves obtained by RT-qPCR. Estrogen sulfotransferase (SULT1E1) was identified as the most responsive target gene by microarray analysis. The agonistic effect of progesterone on SULT1E1 mRNA was concentration-dependently antagonized by RU486 (mifepristone) and ZK137316 and, with lower potency, by 4-nonylphenol, bisphenol A and apigenin. The negative control methyl acetoacetate showed no effect. The effects of progesterone and RU486 were confirmed on the protein level by Western blotting. We demonstrated proof of principle that our Ishikawa model is suitable to quantitatively study the effects of antiprogestin-like chemicals on endometrial target genes in comparison to pharmaceutical reference compounds. This test is useful for hazard identification and may contribute to reducing animal studies. -- Highlights: ► We compare progesterone receptor-mediated endometrial effects of chemicals and drugs. ► 4-Nonylphenol, bisphenol A and apigenin exert weak
Directory of Open Access Journals (Sweden)
Ali ZARE
2015-10-01
Full Text Available Background: Gastric cancer is one of the most prevalent causes of cancer-related death in the world. Survival of patients after surgery involves identifying risk factors. There are various models to detect the effect of risk factors on patients' survival. The present study aims at evaluating these models. Methods: Data from 330 gastric cancer patients diagnosed at the Iran Cancer Institute during 1995-99 and followed up to the end of 2011 were analyzed. The survival status of these patients in 2011 was determined by reopening the files as well as phone calls, and the effect of various factors such as demographic, clinical, treatment, and post-surgical variables on patients' survival was studied. To compare various models of survival, the Akaike Information Criterion and Cox-Snell residuals were used. STATA 11 was used for data analyses. Results: Based on Cox-Snell residuals and the Akaike Information Criterion, the exponential (AIC=969.14) and Gompertz (AIC=970.70) models were more efficient than other accelerated failure-time models. Results of the Cox proportional hazards model as well as the analysis of accelerated failure-time models showed that variables such as age (at diagnosis), marital status, relapse, number of supplementary treatments, disease stage, and type of surgery were among factors affecting survival (P<0.05). Conclusion: Although most cancer researchers tend to use the proportional hazards model, accelerated failure-time models in analogous conditions, as they do not require the proportional hazards assumption and consider a parametric statistical distribution for survival time, will be credible alternatives to the proportional hazards model.
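The model comparison above rests on AIC = 2k − 2 log L. For an exponential survival model with right censoring this is easy to compute by hand, since the maximum-likelihood rate is the number of events divided by total follow-up time. A minimal sketch on hypothetical data (not the study's 330 patients):

```python
import math

def exponential_aic(times, events):
    """AIC of an exponential survival model with right censoring.

    times  : follow-up times
    events : 1 if the event (death) was observed, 0 if censored
    MLE of the rate: lambda_hat = (number of events) / (total follow-up time).
    Log-likelihood: d*log(lambda) - lambda*T, with d events and total time T.
    AIC = 2*k - 2*logL, with k = 1 parameter.
    """
    d = sum(events)
    total = sum(times)
    lam = d / total
    loglik = d * math.log(lam) - lam * total
    return 2 * 1 - 2 * loglik

# Hypothetical follow-up times (months) and event indicators.
times = [5.0, 12.0, 30.0, 45.0, 60.0, 72.0]
events = [1, 1, 1, 0, 1, 0]
print(f"exponential AIC = {exponential_aic(times, events):.2f}")
```

Fitting several parametric families (exponential, Weibull, Gompertz, ...) and ranking them by AIC is exactly the selection step the abstract describes; only the log-likelihoods change.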
Horváth, Ferenc; Tóth, Tamás; Wórum, Géza; Koroknai, Balázs; Kádi, Zoltán; Kovács, Gábor; Balázs, Attila; Visnovitz, Ferenc
2015-04-01
The planned construction of two new units at the site of the Paks NPP requires a comprehensive site investigation, including a complete reassessment of the seismic hazard according to Hungarian as well as international standards. Following the regulations of Specific Safety Guide no. 9 (IAEA 2010), the approved Hungarian Geological Investigation Program (HGIP) includes integrated geological-geophysical studies at different scales. The regional study aims to elaborate a new synthesis of all published data for the whole Pannonian basin. This task is nearly completed and the main outcomes have already been published (Horváth et al. 2015). The near regional study is in progress and addresses the construction of a new tectonic model for the circular area with a 50 km radius around the NPP, using a wealth of unpublished oil company seismic and borehole data. The site vicinity study has also been started, with a core activity of 300 km² of 3D seismic data acquisition, processing and interpretation, assisted by a series of additional geophysical surveys, new drillings and geological mapping. This lecture will present a few important results of the near regional study, which shed new light on the intricate tectonic evolution of the Mid-Hungarian Fault Zone (MHFZ), a strongly deformed belt between the Alcapa and Tisza-Dacia megatectonic units. The nuclear power plant is located at the margin of the Tisza unit, near the southern edge of the MHFZ. Reassessment of seismic hazard at the site of the NPP requires a better understanding of the Miocene to Recent tectonic evolution of this region in the central part of the Pannonian basin. The Early to Middle Miocene was a period of rifting, with the formation of 1 to 3 km deep half-grabens filled with terrestrial to marine deposits and a large amount of rift-related volcanic material. The graben fill became strongly deformed as a consequence of the juxtaposition of the two megatectonic units, leading to strong compression and development of
Treatment of Uncertainties in Probabilistic Tsunami Hazard
Thio, H. K.
2012-12-01
Over the last few years, we have developed a framework for developing probabilistic tsunami inundation maps, which includes comprehensive quantification of earthquake recurrence as well as uncertainties, and applied it to the development of a tsunami hazard map of California. The various uncertainties in tsunami source and propagation models are an integral part of a comprehensive probabilistic tsunami hazard analysis (PTHA), and often drive the hazard at low probability levels (i.e. long return periods). There is no unique manner in which uncertainties are included in the analysis, although in general we distinguish between "natural" or aleatory variability, such as slip distribution and event magnitude, and uncertainties due to an incomplete understanding of the behavior of the earth, called epistemic uncertainties, such as scaling relations and rupture segmentation. Aleatory uncertainties are typically included through integration over distribution functions based on regression analyses, whereas epistemic uncertainties are included using logic trees. We will discuss how the different uncertainties were included in our recent probabilistic tsunami inundation maps for California, and their relative importance for the final results. Including these uncertainties in offshore exceedance wave heights is straightforward, but the problem becomes more complicated once the non-linearity of near-shore propagation and inundation is encountered. By using the probabilistic offshore wave heights as the input level for the inundation models, the uncertainties up to that point can be included in the final maps. PTHA provides a consistent analysis of tsunami hazard and will become an important tool in diverse areas such as coastal engineering and land use planning. The inclusive nature of the analysis, where few assumptions are made a priori as to which sources are significant, means that a single analysis can provide a comprehensive view of the hazard and its dominant sources
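The logic-tree treatment of epistemic uncertainty amounts to weight-averaging the exceedance-rate curves of the branches. A minimal sketch with illustrative branch weights and annual exceedance rates (not values from the California maps):

```python
# Each logic-tree branch carries a weight and an annual exceedance-rate curve:
# the rate of exceeding each offshore wave height. All numbers are illustrative.
heights = [0.5, 1.0, 2.0, 4.0]               # wave heights (m)
branches = [
    (0.5, [0.10, 0.030, 0.0050, 0.00050]),   # (weight, rates per height)
    (0.3, [0.20, 0.050, 0.0100, 0.00100]),
    (0.2, [0.05, 0.010, 0.0020, 0.00010]),
]

def mean_hazard(branches):
    """Weight-average exceedance rates across epistemic branches."""
    n = len(branches[0][1])
    return [sum(w * rates[i] for w, rates in branches) for i in range(n)]

curve = mean_hazard(branches)
for h, rate in zip(heights, curve):
    print(f"exceed {h:3.1f} m: {rate:.5f}/yr (return period ~{1 / rate:,.0f} yr)")
```

Aleatory variability, by contrast, is already folded into each branch's rate curve by integrating over the relevant distributions (slip, magnitude), so only epistemic alternatives appear as branches.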
Coastal Digital Elevation Models (DEMs) for tsunami hazard assessment on the French coasts
Maspataud, Aurélie; Biscara, Laurie; Hébert, Hélène; Schmitt, Thierry; Créach, Ronan
2015-04-01
Building precise and up-to-date coastal DEMs is a prerequisite for accurate modeling and forecasting of hydrodynamic processes at local scale. Marine flooding, originating from tsunamis, storm surges or waves, is one of them. Some high resolution DEMs are being generated for multiple coast configurations (gulf, embayment, strait, estuary, harbor approaches, low-lying areas…) along French Atlantic and Channel coasts. This work is undertaken within the framework of the TANDEM project (Tsunamis in the Atlantic and the English ChaNnel: Definition of the Effects through numerical Modeling) (2014-2017). DEMs boundaries were defined considering the vicinity of French civil nuclear facilities, site effects considerations and potential tsunamigenic sources. Those were identified from available historical observations. Seamless integrated topographic and bathymetric coastal DEMs will be used by institutions taking part in the study to simulate expected wave height at regional and local scale on the French coasts, for a set of defined scenarii. The main tasks were (1) the development of a new capacity of production of DEM, (2) aiming at the release of high resolution and precision digital field models referred to vertical reference frameworks, that require (3) horizontal and vertical datum conversions (all source elevation data need to be transformed to a common datum), on the basis of (4) the building of (national and/or local) conversion grids of datum relationships based on known measurements. Challenges in coastal DEMs development deal with good practices throughout model development that can help minimizing uncertainties. This is particularly true as scattered elevation data with variable density, from multiple sources (national hydrographic services, state and local government agencies, research organizations and private engineering companies) and from many different types (paper fieldsheets to be digitized, single beam echo sounder, multibeam sonar, airborne laser
DEFF Research Database (Denmark)
Enzenhoefer, R.; Binning, Philip John; Nowak, W.
2015-01-01
Risk is often defined as the product of probability, vulnerability and value. Drinking water supply from groundwater abstraction is often at risk due to multiple hazardous land use activities in the well catchment. Each hazard might or might not introduce contaminants into the subsurface at any...... by a German drinking water catchment. As one may expect, the results depend strongly on the chosen stakeholder objectives, but they are equally sensitive to different approaches for risk aggregation across different hazards, contaminant types, and over time....
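The definition of risk as the product of probability, vulnerability and value can be sketched directly; the simplest aggregation across hazards is then a sum of expected annual losses (other aggregation choices are exactly what the abstract notes the results are sensitive to). All numbers below are hypothetical, not from the German catchment study:

```python
# Hypothetical hazardous land uses in a well catchment: each entry is
# (annual probability of a contaminant release, vulnerability of the
#  abstraction well to that release, value at stake). Illustrative only.
hazards = {
    "fuel station": (0.02, 0.8, 1_000_000),
    "agriculture":  (0.30, 0.1, 1_000_000),
    "road runoff":  (0.10, 0.3, 1_000_000),
}

def risk(p, vuln, value):
    """Risk as the product of probability, vulnerability and value."""
    return p * vuln * value

total = sum(risk(*h) for h in hazards.values())
for name, h in hazards.items():
    print(f"{name:12s} expected annual loss ~ {risk(*h):>10,.0f}")
print(f"{'total':12s} expected annual loss ~ {total:>10,.0f}")
```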
Dimas, Leon S; Buehler, Markus J
2014-07-07
Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state-of-the-art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), and the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness-ratio variations of composite constituents as an important feature in determining composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks that obtain optimal combinations of stiffness and toughness.
Modelling of C2 addition route to the formation of C60
Khan, Sabih D
2016-01-01
To understand the phenomenon of fullerene growth during its synthesis, an attempt is made to model a minimum-energy growth route using a semi-empirical quantum mechanics code. C2 addition leading to C60 was modelled and three main routes, i.e. cyclic ring growth, the pentagon road and the fullerene road, were studied. The growth starts with linear chains and, at n = 10, ring structures begin to dominate. The rings continue to grow and, at some point n > 30, they transform into closed-cage fullerenes, and the growth is shown to progress by the fullerene road until C60 is formed. The computer simulations predict a transition from a C38 ring to fullerene. Other growth mechanisms could also occur in the energetic environment commonly encountered in fullerene synthesis, but our purpose was to identify a minimum-energy route, which yields the most probable structures. Our results also indicate that, at n = 20, the corannulene structure is energetically more stable than the corresponding fullerene and graphene sheet, however a ring str...
Dong, Wenming; Wan, Jiamin
2014-06-17
Many aquifers contaminated by U(VI)-containing acidic plumes are composed predominantly of quartz-sand sediments. The F-Area of the Savannah River Site (SRS) in South Carolina (USA) is an example. To predict U(VI) mobility and natural attenuation, we conducted U(VI) adsorption experiments using the F-Area plume sediments and reference quartz, goethite, and kaolinite. The sediments are composed of ∼96% quartz-sand and 3-4% fine fractions of kaolinite and goethite. We developed a new humic acid adsorption method for determining the relative surface area abundances of goethite and kaolinite in the fine fractions. This method is expected to be applicable to many other binary mineral pairs, and allows successful application of the component additivity (CA) approach based surface complexation modeling (SCM) at the SRS F-Area and other similar aquifers. Our experimental results indicate that quartz has stronger U(VI) adsorption ability per unit surface area than goethite and kaolinite at pH ≤ 4.0. Our modeling results indicate that the binary (goethite/kaolinite) CA-SCM under-predicts U(VI) adsorption to the quartz-sand dominated sediments at pH ≤ 4.0. The new ternary (quartz/goethite/kaolinite) CA-SCM provides excellent predictions. The contributions of quartz-sand, kaolinite, and goethite to U(VI) adsorption and the potential influences of dissolved Al, Si, and Fe are also discussed.
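The component additivity (CA) idea itself can be sketched compactly: whole-sediment adsorption is predicted as the surface-area-weighted sum of single-mineral contributions, with no refitting to the sediment data. The area fractions and per-mineral adsorption densities below are hypothetical placeholders, not the study's fitted surface complexation model values:

```python
# Component additivity (CA): predict whole-sediment U(VI) adsorption as the
# surface-area-weighted sum of single-mineral contributions. Both the area
# fractions and the adsorption densities are hypothetical, for illustration.
minerals = {
    #            (surface-area fraction, adsorbed U(VI) per unit area, arb. units)
    "quartz":    (0.90, 0.08),
    "kaolinite": (0.07, 0.05),
    "goethite":  (0.03, 0.06),
}

def ca_prediction(minerals):
    """Surface-area-weighted additive prediction, the core of the CA approach."""
    return sum(frac * density for frac, density in minerals.values())

print(f"predicted whole-sediment adsorption density: {ca_prediction(minerals):.4f}")
```

The paper's point maps directly onto this structure: dropping the quartz term (a binary goethite/kaolinite CA model) under-predicts at low pH, because quartz dominates both the area fraction and, per unit area, the low-pH adsorption.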
Generalized additive models reveal the intrinsic complexity of wood formation dynamics.
Cuny, Henri E; Rathgeber, Cyrille B K; Kiessé, Tristan Senga; Hartmann, Felix P; Barbeito, Ignacio; Fournier, Meriem
2013-04-01
The intra-annual dynamics of wood formation, which involves the passage of newly produced cells through three successive differentiation phases (division, enlargement, and wall thickening) to reach the final functional mature state, has traditionally been described in conifers as three delayed bell-shaped curves followed by an S-shaped curve. Here the classical view represented by the 'Gompertz function (GF) approach' was challenged using two novel approaches based on parametric generalized linear models (GLMs) and 'data-driven' generalized additive models (GAMs). These three approaches (GFs, GLMs, and GAMs) were used to describe seasonal changes in cell numbers in each of the xylem differentiation phases and to calculate the timing of cell development in three conifer species [Picea abies (L.), Pinus sylvestris L., and Abies alba Mill.]. GAMs outperformed GFs and GLMs in describing intra-annual wood formation dynamics, showing two left-skewed bell-shaped curves for division and enlargement, and a right-skewed bimodal curve for thickening. Cell residence times progressively decreased through the season for enlargement, whilst increasing late but rapidly for thickening. These patterns match changes in cell anatomical features within a tree ring, which allows the separation of earlywood and latewood into two distinct cell populations. A novel statistical approach is presented which renews our understanding of xylogenesis, a dynamic biological process in which the rate of cell production interplays with cell residence times in each developmental phase to create complex seasonal patterns.
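The classical 'GF approach' that the study challenges describes the cumulative cell count with a Gompertz function, commonly written n(t) = A exp(-exp(b - c t)). A minimal sketch with illustrative parameters (not fitted to the conifer data of the study):

```python
import math

def gompertz(t, A, b, c):
    """Gompertz function n(t) = A * exp(-exp(b - c*t)): the classical
    S-shaped description of cumulative cell number over the growing season."""
    return A * math.exp(-math.exp(b - c * t))

# Illustrative parameters: asymptote (cells), location, rate (per day).
A, b, c = 40.0, 3.0, 0.04
for day in (0, 50, 100, 150, 200):
    print(f"day {day:3d}: {gompertz(day, A, b, c):5.1f} cells")
```

A GAM replaces this fixed parametric shape with a data-driven smooth, which is how the skewed and bimodal seasonal curves reported above become visible.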
The 2007 Bengkulu earthquake, its rupture model and implications for seismic hazard
Indian Academy of Sciences (India)
A Ambikapathy; J K Catherine; V K Gahalaut; M Narsaiah; A Bansal; P Mahesh
2010-08-01
The 12 September 2007 great Bengkulu earthquake (Mw 8.4) occurred on the west coast of Sumatra, about 130 km SW of Bengkulu. The earthquake was followed by two strong aftershocks of Mw 7.9 and 7.0. We estimate coseismic offsets due to the mainshock, derived from near-field Global Positioning System (GPS) measurements from nine continuous SuGAr sites operated by the California Institute of Technology (Caltech) group. Using a forward modelling approach, we estimated the slip distribution on the causative rupture of the 2007 Bengkulu earthquake and found two patches of large slip, one located north of the mainshock epicenter and the other under the Pagai Islands. Both patches of large slip on the rupture occurred under the island belt and shallow water. Thus, despite its great magnitude, this earthquake did not generate a major tsunami. Further, we suggest that the occurrence of great earthquakes in the subduction zone on either side of the Siberut Island region might have led to an increase in static stress in the region, where the last great earthquake occurred in 1797 and where there is evidence of strain accumulation.
GeoClaw-STRICHE: A coupled model for Sediment TRansport In Coastal Hazard Events
Tang, Hui
2016-01-01
GeoClaw-STRICHE is designed for simulating the physical impacts of tsunamis as they relate to erosion, transport and deposition. GeoClaw-STRICHE comprises three components: (1) the nonlinear shallow water equations; (2) an advection-diffusion equation; (3) an equation for morphology updating. Multiple grain sizes and sediment layers are included in GeoClaw-STRICHE to simulate the grain-size distribution and add the capability to develop grain-size trends from the bottom to the top of a simulated deposit as well as along the inundation. Unlike previous models based on empirical equations or the sediment concentration gradient, the standard Van Leer method is applied to calculate sediment flux. We tested and verified GeoClaw-STRICHE against the flume experiment of Johnson et al. (2016) and data from the 2004 Indian Ocean tsunami in Kuala Meurisi as published in \citet{JGRF:JGRF786}. The comparison with experimental data shows GeoClaw-STRICHE's capability to simulate sediment thickness and grain-size distribution in experimenta...
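The Van Leer scheme mentioned above is built around the Van Leer flux limiter, phi(r) = (r + |r|)/(1 + |r|), where r is the ratio of successive solution gradients. A minimal sketch of the limiter alone; the full sediment-flux computation in GeoClaw-STRICHE is of course more involved:

```python
def van_leer_limiter(r):
    """Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|).

    r is the ratio of successive gradients. The limiter vanishes at
    extrema (r <= 0) and blends between low- and high-order fluxes
    elsewhere, keeping the scheme total-variation diminishing (TVD).
    """
    return (r + abs(r)) / (1.0 + abs(r))

for r in (-1.0, 0.0, 0.5, 1.0, 4.0):
    print(f"phi({r:4.1f}) = {van_leer_limiter(r):.3f}")
```

Note phi(1) = 1 (smooth data passes through unmodified) and phi -> 2 as r -> infinity, the standard bounds for a second-order TVD limiter.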
Directory of Open Access Journals (Sweden)
Haiyan Lang
2016-07-01
Conclusion: According to the syndrome differentiation criteria for the disease-syndrome combined model of ITP, the APS-injected animal model of ITP, replicated through the passive immune modeling method without additional conditions, possesses the characteristics of a disease-syndrome combined model. It provides an ideal tool for the development of traditional Chinese medicine pharmacology experiments.
F. Pretis; Hendry, D.F.
2013-01-01
We outline six important hazards that can be encountered in econometric modelling of time-series data, and apply that analysis to demonstrate errors in the empirical modelling of climate data in Beenstock et al. (2012). We show that the claim made in Beenstock et al. (2012) as to the different degrees of integration of CO2 and temperature is incorrect. In particular, the level of integration is not constant and not intrinsic to the process. Further, we illustrate that the ...
Premaratne, Pavithra Dhanuka
Disruption and fragmentation of an asteroid using nuclear explosive devices (NEDs) is a highly complex yet practical solution to mitigating the impact threat of asteroids with short warning time. A Hypervelocity Asteroid Intercept Vehicle (HAIV) concept, developed at the Asteroid Deflection Research Center (ADRC), consists of a primary vehicle that acts as a kinetic impactor and a secondary vehicle that houses NEDs. The kinetic impactor (lead vehicle) strikes the asteroid, creating a crater. The secondary vehicle will immediately enter the crater and detonate its nuclear payload, creating a blast wave powerful enough to fragment the asteroid. Modeling and hydrodynamic simulation of the nuclear subsurface explosion has been a challenging research goal that paves the way to an array of mission-critical information. A mesh-free hydrodynamic simulation method, Smoothed Particle Hydrodynamics (SPH), was utilized to obtain both qualitative and quantitative solutions for explosion efficiency. Commercial fluid dynamics packages such as AUTODYN, along with in-house GPU-accelerated SPH algorithms, were used to validate and optimize high-energy explosion dynamics for a variety of test cases. Energy coupling from the NED to the target body was also examined to determine the effectiveness of nuclear subsurface explosions. Success of a disruption mission also depends on the survivability of the nuclear payload when the secondary vehicle approaches the newly formed crater at a velocity of 10 km/s or higher. The vehicle may come into contact with debris ejected from the crater, which required the conceptual development of a Whipple shield. As the vehicle closes on the crater, its skin may also experience extreme temperatures due to heat radiated from the crater bottom. In order to address this thermal problem, a simple metallic thermal shield design was implemented, utilizing a radiative heat transfer algorithm and nodal solutions obtained from hydrodynamic simulations.
Radianti, Jaziar; Granmo, Ole-Christoffer
2014-01-01
Allocating limited resources in an optimal manner when rescuing victims from a hazard is a complex and error-prone task, because the hazards involved typically evolve over time: stagnating, building up or diminishing. Typical error sources are miscalculation of resource availability and of the victims' condition. Thus, there is a need for decision support when it comes to rapidly predicting where human fatalities are likely to occur, to ensure timely rescue. This paper proposes a proba...
Shie, Ruei-Hao; Chan, Chang-Chuan
2013-10-15
The air monitors used by most regulatory authorities are designed to track the daily emissions of conventional pollutants and are not well suited for measuring hazardous air pollutants that are released from accidents such as refinery fires. By applying a wide variety of air-monitoring systems, including on-line Fourier transform infrared spectroscopy, gas chromatography with a flame ionization detector, and off-line gas chromatography-mass spectrometry for measuring hazardous air pollutants during and after a fire at a petrochemical complex in central Taiwan on May 12, 2011, we were able to detect significantly higher levels of combustion-related gaseous and particulate pollutants, refinery-related hydrocarbons, and chlorinated hydrocarbons, such as 1,2-dichloroethane, vinyl chloride monomer, and dichloromethane, inside the complex and 10 km downwind of the fire than those measured during normal operation periods. Both back trajectories and dispersion models further confirmed that the high levels of hazardous air pollutants in the neighboring communities were carried by air masses flowing from the 22 plants that were shut down by the fire. This study demonstrates that hazardous air pollutants from industrial accidents can successfully be identified and traced back to their emission sources by applying a timely and comprehensive air-monitoring campaign and back-trajectory air flow models.
Earthquake Hazard and Risk in Alaska
Black Porto, N.; Nyst, M.
2014-12-01
Alaska is one of the most seismically active and tectonically diverse regions in the United States. To examine risk, we have updated the seismic hazard model in Alaska. The current RMS Alaska hazard model is based on the 2007 probabilistic seismic hazard maps for Alaska (Wesson et al., 2007; Boyd et al., 2007). The 2015 RMS model will update several key source parameters, including: extending the earthquake catalog, implementing a new set of crustal faults, and updating the subduction zone geometry and recurrence rate. First, we extend the earthquake catalog to 2013, decluster the catalog, and compute new background rates. We then create a crustal fault model based on the Alaska 2012 fault and fold database. This new model increases the number of crustal faults from ten in 2007 to 91 in the 2015 model, including the addition of the western Denali fault, the Cook Inlet folds near Anchorage, and thrust faults near Fairbanks. Previously the subduction zone was modeled at a uniform depth; in this update, we model the intraslab as a series of deep stepping events. We also use the best available data, such as Slab 1.0, to update the geometry of the subduction zone. The city of Anchorage represents 80% of the risk exposure in Alaska. In the 2007 model, the hazard in Alaska was dominated by the frequent magnitude 7 to 8 events (Gutenberg-Richter distribution), while large magnitude 8+ events had a low recurrence rate (characteristic) and therefore did not contribute as much to the overall risk. We will review these recurrence rates and present the results and their impact on Anchorage. We will compare our hazard update to the 2007 USGS hazard map, and discuss the changes and their drivers. Finally, we will examine the impact that model changes have on Alaska earthquake risk. Risk metrics considered include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the
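The Gutenberg-Richter versus characteristic recurrence behaviour discussed above can be illustrated with a short sketch. The a- and b-values below are placeholders for illustration only, not parameters of the RMS Alaska model.

```python
import math

def gr_annual_rate(m, a, b):
    """Annual rate of earthquakes with magnitude >= m under the
    Gutenberg-Richter relation log10 N(>=m) = a - b*m."""
    return 10.0 ** (a - b * m)

def truncated_gr_rate(m, m_min, m_max, a, b):
    """Rate for a doubly truncated G-R distribution: no events above m_max."""
    beta = b * math.log(10.0)
    tail = math.exp(-beta * (m_max - m_min))
    frac = (math.exp(-beta * (m - m_min)) - tail) / (1.0 - tail)
    return gr_annual_rate(m_min, a, b) * frac

# hypothetical a/b values: roughly one M>=7 event per 1000 years
print(gr_annual_rate(7.0, a=4.0, b=1.0))
print(truncated_gr_rate(7.0, 5.0, 8.5, 4.0, 1.0))
```

Truncating the distribution at a maximum magnitude slightly lowers the rate of the largest events, which is why the treatment of magnitude 8+ recurrence matters so much for the risk in Anchorage.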
Influence of the heterogeneous reaction HCl + HOCl on an ozone hole model with hydrocarbon additions
Elliott, Scott; Cicerone, Ralph J.; Turco, Richard P.; Drdla, Katja; Tabazadeh, Azadeh
1994-02-01
Injection of ethane or propane has been suggested as a means for reducing ozone loss within the Antarctic vortex because alkanes can convert active chlorine radicals into hydrochloric acid. In kinetic models of vortex chemistry including as heterogeneous processes only the hydrolysis and HCl reactions of ClONO2 and N2O5, parts per billion by volume levels of the light alkanes counteract ozone depletion by sequestering chlorine atoms. Introduction of the surface reaction of HCl with HOCl causes ethane to deepen baseline ozone holes and generally works to impede any mitigation by hydrocarbons. The increased depletion occurs because HCl + HOCl can be driven by HOx radicals released during organic oxidation. Following initial hydrogen abstraction by chlorine, alkane breakdown leads to a net hydrochloric acid activation as the remaining hydrogen atoms enter the photochemical system. Lowering the rate constant for reactions of organic peroxy radicals with ClO to 10⁻¹³ cm³ molecule⁻¹ s⁻¹ does not alter results, and the major conclusions are insensitive to the timing of the ethane additions. Ignoring the organic peroxy radical plus ClO reactions entirely restores remediation capabilities by allowing HOx removal independent of HCl. Remediation also returns if early evaporation of polar stratospheric clouds leaves hydrogen atoms trapped in aldehyde intermediates, but real ozone losses are small in such cases.
Eddy Current Tomography Based on a Finite Difference Forward Model with Additive Regularization
Trillon, A.; Girard, A.; Idier, J.; Goussard, Y.; Sirois, F.; Dubost, S.; Paul, N.
2010-02-01
Eddy current tomography is a nondestructive evaluation technique used for characterization of metal components. It is an inverse problem acknowledged as difficult to solve since it is both ill-posed and nonlinear. Our goal is to derive an inversion technique with improved tradeoff between quality of the results, computational requirements and ease of implementation. This is achieved by fully accounting for the nonlinear nature of the forward problem by means of a system of bilinear equations obtained through a finite difference modeling of the problem. The bilinear character of equations with respect to the electric field and the relative conductivity is taken advantage of through a simple contrast source inversion-like scheme. The ill-posedness is dealt with through the addition of regularization terms to the criterion, the form of which is determined according to computational constraints and the piecewise constant nature of the medium. Therefore an edge-preserving functional is selected. The performance of the resulting method is illustrated using 2D synthetic data examples.
Enhancement of colour stability of anthocyanins in model beverages by gum arabic addition.
Chung, Cheryl; Rojanasasithara, Thananunt; Mutilangi, William; McClements, David Julian
2016-06-15
This study investigated the potential of gum arabic to improve the stability of anthocyanins that are used in commercial beverages as natural colourants. The degradation of purple carrot anthocyanin in model beverage systems (pH 3.0) containing L-ascorbic acid proceeded with a first-order reaction rate during storage (40 °C for 5 days in light). The addition of gum arabic (0.05-5.0%) significantly enhanced the colour stability of anthocyanin, with the most stable systems observed at intermediate levels (1.5%). A further increase in concentration (>1.5%) reduced its efficacy due to a change in the conformation of the gum arabic molecules that hindered their exposure to the anthocyanins. Fluorescence quenching measurements showed that the anthocyanin could have interacted with the glycoprotein fractions of the gum arabic through hydrogen bonding, resulting in enhanced stability. Overall, this study provides valuable information about enhancing the stability of anthocyanins in beverage systems using natural ingredients.
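The first-order degradation kinetics reported above can be sketched in a few lines. The concentrations and rate constant below are hypothetical illustrations, not the paper's measured values.

```python
import math

def first_order_conc(c0, k, t):
    """Concentration after time t under first-order decay dC/dt = -k*C."""
    return c0 * math.exp(-k * t)

def rate_constant(c0, ct, t):
    """Estimate the first-order rate constant from two measurements."""
    return math.log(c0 / ct) / t

# hypothetical example: 40% colour loss over 5 days of storage
k = rate_constant(100.0, 60.0, 5.0)
half_life = math.log(2.0) / k
print(round(k, 4), "per day;", round(half_life, 2), "days to half colour")
```

Fitting k under each gum arabic concentration and comparing the resulting half-lives is a standard way to quantify the stabilizing effect described in the abstract.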
Directory of Open Access Journals (Sweden)
George C. Efthimiou
2015-06-01
Full Text Available The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate or accidental releases of hazardous substances, odour fluctuations, or exceedance of material flammability levels. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well-documented flow conditions, from a boundary-layer wind-tunnel experiment. The experimental dataset includes 196 time-resolved concentration measurements of dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of predictions within a factor of two of the observations. For large time intervals, an exponential correction term was introduced into the model based on the experimental observations. The new model handles all time intervals, bringing the fraction of predictions within a factor of two of the observations to 100%.
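The "factor of two of observations" score used above (commonly written FAC2) is computed as in this sketch; the prediction and observation values are made up for illustration.

```python
def fac2(predicted, observed):
    """Fraction of predictions within a factor of two of the observations."""
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    hits = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)

# hypothetical exposure predictions vs. measurements (arbitrary units)
pred = [1.2, 0.8, 3.5, 0.4, 2.0]
obs = [1.0, 1.0, 1.0, 1.0, 1.0]
print(fac2(pred, obs))  # 3 of 5 within a factor of two -> 0.6
```

A FAC2 of 0.95 in the abstract thus means 95% of the model's maximum-exposure predictions fell between half and twice the measured value.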
Seismic hazard assessment in Aswan, Egypt
Deif, A.; Hamed, H.; Ibrahim, H. A.; Abou Elenean, K.; El-Amin, E.
2011-12-01
The study of earthquake activity and seismic hazard assessment around Aswan is very important due to the proximity of the Aswan High Dam. The Aswan High Dam is founded on hard Precambrian bedrock and is considered the most important project in Egypt from the social, agricultural and electrical energy production points of view. The seismotectonic settings around Aswan strongly suggest that medium to large earthquakes are possible, particularly along the Kalabsha, Seiyal and Khor El-Ramla faults. The seismic hazard for Aswan is calculated using the probabilistic approach within a logic-tree framework. Alternative seismogenic models and ground motion scaling relationships are selected to account for the epistemic uncertainty. Seismic hazard values on rock were calculated to create contour maps for eight ground motion spectral periods and for a return period of 475 years, which is deemed appropriate for structural design standards in the Egyptian building codes. The results were also displayed in terms of uniform hazard spectra for rock sites at the Aswan High Dam for return periods of 475 and 2475 years. In addition, the ground-motion levels were deaggregated at the dam site, in order to provide insight into which events are the most important for hazard estimation. The peak ground acceleration ranges between 36 and 152 cm s⁻² for a return period of 475 years (equivalent to a 90% probability of non-exceedance in 50 years). Spectral hazard values clearly indicate that, compared with countries of high seismic risk, the seismicity in the Aswan region can be described as low at most sites to moderate in the area between the Kalabsha and Seiyal faults.
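The correspondence between the 475-year return period and the quoted 90% non-exceedance probability in 50 years follows from a Poisson occurrence assumption, as this sketch shows.

```python
import math

def return_period(p_exceed, t_years):
    """Return period giving probability p_exceed of at least one
    exceedance in t_years, under a Poisson occurrence model."""
    return -t_years / math.log(1.0 - p_exceed)

def non_exceedance_prob(T, t_years):
    """Probability of zero exceedances in t_years for return period T."""
    return math.exp(-t_years / T)

print(round(return_period(0.10, 50)))            # ~475 years
print(round(non_exceedance_prob(475.0, 50), 3))  # ~0.9
```

The same conversion gives the 2475-year return period from a 2% exceedance probability in 50 years, the other level reported for the dam site.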
Bank Bailouts and Moral Hazard : Evidence from Germany
Dam, L.; Koetter, M.
2012-01-01
We use a structural econometric model to provide empirical evidence that safety nets in the banking industry lead to additional risk taking. To identify the moral hazard effect of bailout expectations on bank risk, we exploit the fact that regional political factors explain bank bailouts but not ban
Directory of Open Access Journals (Sweden)
Carstens, W. A.
2013-08-01
Full Text Available Physical asset management (PAM is of increasing concern for companies in industry today. A key performance area of PAM is asset care plans (ACPs, which consist of maintenance strategies such as usage based maintenance (UBM and condition based maintenance (CBM. Data obtained from the South African mining industry was modelled using a CBM prognostic model called the proportional hazards model (PHM. Results indicated that the developed model produced estimates that were reasonable representations of reality. These findings provide an exciting basis for the development of future Weibull PHMs that could result in huge maintenance cost savings and reduced failure occurrences.
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal...
Deckert, George
2010-01-01
This viewgraph presentation reviews The NASA Hazard Analysis process. The contents include: 1) Significant Incidents and Close Calls in Human Spaceflight; 2) Subsystem Safety Engineering Through the Project Life Cycle; 3) The Risk Informed Design Process; 4) Types of NASA Hazard Analysis; 5) Preliminary Hazard Analysis (PHA); 6) Hazard Analysis Process; 7) Identify Hazardous Conditions; 8) Consider All Interfaces; 9) Work a Preliminary Hazard List; 10) NASA Generic Hazards List; and 11) Final Thoughts
Energy Technology Data Exchange (ETDEWEB)
Morais, Keli Cristiane Correia; Ribeiro, Robert Luis Lara; Santos, Kassiana Ribeiro dos; Mariano, Andre Bellin [Mariano Center for Research and Development of Sustainable Energy (NPDEAS), Curitiba, PR (Brazil); Vargas, Jose Viriato Coelho [Departament of Mechanical Engineering, Federal University of Parana (UFPR) Curitiba, PR (Brazil)
2010-07-01
The Brazilian National Program for Biofuel Production has been encouraging diversification of feedstock for biofuel production. One of the most promising alternatives is the use of microalgae biomass. The cultivation of microalgae is conducted in aquatic systems, so microalgae oil production does not compete for agricultural land. Microalgae have greater photosynthetic efficiency than higher plants and are efficient at fixing CO2. The challenge is to reduce production costs, which can be minimized by increasing biomass and oil productivity. Aiming to increase the production of microalgae biomass, mixotrophic cultivation with the addition of glycerol has been shown to be very promising. During the production of biodiesel from microalgae, glycerol is available as a side product of the transesterification reaction and could be used as an organic carbon source for mixotrophic growth, resulting in increased biomass productivity. In this paper, to study the effect of glycerol under experimental conditions, a batch culture of the diatom Phaeodactylum tricornutum was performed in a 2-liter flask in a room with controlled temperature and light intensity. During 16 days of cultivation, the number of cells per ml was counted periodically in a Neubauer chamber. The dry biomass in the control experiment (without glycerol) was measured every two days by vacuum filtration. In the mixotrophic experiment with a glycerol concentration of 1.5 M, dry biomass was assessed similarly on the 10th and 14th days of cultivation. Through a volume element methodology, a mathematical model was written to calculate the microalgae growth rate, using an equation that describes the influence of irradiation and nutrient concentration on microalgae growth. A simulation time of 16 days was used in the computations, with an initial concentration of 0.1 g l⁻¹. In order to compare
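A batch growth simulation of the kind described can be sketched with Monod-type light and nutrient limitation integrated by forward Euler. This is a common textbook form, not the paper's exact equation, and every parameter value below is hypothetical.

```python
def growth_rate(mu_max, irradiance, k_i, nutrient, k_s):
    """Specific growth rate with Monod-type light and nutrient limitation."""
    return mu_max * (irradiance / (k_i + irradiance)) * (nutrient / (k_s + nutrient))

def simulate(days, dt=0.01, x0=0.1, s0=5.0, yield_coeff=0.5,
             mu_max=1.0, irradiance=150.0, k_i=100.0, k_s=1.0):
    """Forward-Euler integration of dX/dt = mu*X with nutrient drawdown
    dS/dt = -(1/Y) dX/dt. All parameter values are placeholders."""
    x, s = x0, s0
    for _ in range(int(days / dt)):
        mu = growth_rate(mu_max, irradiance, k_i, s, k_s)
        dx = mu * x * dt
        x += dx
        s = max(s - dx / yield_coeff, 0.0)
    return x

# 16-day batch starting from 0.1 g/L, matching the abstract's setup
print(round(simulate(16.0), 2))  # biomass plateaus once the nutrient is spent
```

With these placeholder values the biomass saturates at x0 + Y*s0, which is the qualitative behaviour a batch culture shows once the limiting nutrient is exhausted.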
Lin, Sijie; Taylor, Alicia A; Ji, Zhaoxia; Chang, Chong Hyun; Kinsinger, Nichola M; Ueng, William; Walker, Sharon L; Nel, André E
2015-02-24
Although copper-containing nanoparticles are used in commercial products such as fungicides and bactericides, we presently do not understand the environmental impact on other organisms that may be inadvertently exposed. In this study, we used the zebrafish embryo as a screening tool to study the potential impact of two nano Cu-based materials, CuPRO and Kocide, in comparison to nanosized and micron-sized Cu and CuO particles in their pristine form (0-10 ppm) as well as following their transformation in an experimental wastewater treatment system. This was accomplished by construction of a modeled domestic septic tank system from which effluents could be retrieved at different stages following particle introduction (10 ppm). The Cu speciation in the effluent was identified as nondissolvable inorganic Cu(H2PO2)2 and nondiffusible organic Cu by X-ray diffraction, inductively coupled plasma mass spectrometry (ICP-MS), diffusive gradients in thin-films (DGT), and Visual MINTEQ software. While the nanoscale materials, including the commercial particles, were clearly more potent (showing 50% hatching interference above 0.5 ppm) than the micron-scale particulates with no effect on hatching up to 10 ppm, the Cu released from the particles in the septic tank underwent transformation into nonbioavailable species that failed to interfere with the function of the zebrafish embryo hatching enzyme. Moreover, we demonstrate that the addition of humic acid, as an organic carbon component, could lead to a dose-dependent decrease in Cu toxicity in our high content zebrafish embryo screening assay. Thus, the use of zebrafish embryo screening, in combination with the effluents obtained from a modeled exposure environment, enables a bioassay approach to follow the change in the speciation and hazard potential of Cu particles instead of difficult-to-perform direct particle tracking.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2016-07-01
This study statistically analyzed a grain-size-based additivity model that has been proposed to scale reaction rates and parameters from the laboratory to the field. The additivity model assumes that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding the reaction properties of the individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the multi-rate parameters for individual grain size fractions. These statistical properties were then used to analyze how well the additivity model predicts rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of the individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model: U(VI) desorption in the individual grain size fractions has to be simulated in order to apply it. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated, and it provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
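The linear additivity idea (predicting a bulk sediment property as a mass-weighted sum over grain-size fractions) reduces to a one-line computation. The mass fractions and site concentrations below are invented for illustration, not the study's data.

```python
def additive_prediction(fractions):
    """Mass-weighted linear addition of a reaction property over
    grain-size fractions: sum_i w_i * v_i, with the w_i summing to 1.

    fractions: iterable of (mass_fraction, property_value) pairs.
    """
    weights = [w for w, _ in fractions]
    assert abs(sum(weights) - 1.0) < 1e-9, "mass fractions must sum to 1"
    return sum(w * v for w, v in fractions)

# hypothetical reactive-site concentrations (mol/g) per size fraction
composite = additive_prediction([
    (0.30, 4.0e-6),  # fines
    (0.50, 1.0e-6),  # sand
    (0.20, 0.2e-6),  # 2-8 mm gravel, often ignored but non-negligible
])
print(composite)
```

The study's point is that this works for extensive properties such as site concentration, but rate constants cannot be averaged this way; the kinetics of each fraction must be simulated and the resulting fluxes summed.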
Contreras, M. T.; Escauriaza, C. R.
2015-12-01
Rain-induced flash floods are common events in regions close to the southern Andes, in northern and central Chile. Rapid urban development combined with a changing climate and ENSO effects has resulted in an alarming proximity of flood-prone streams to densely populated areas in the Andean foothills, increasing the risk for cities and infrastructure. Simulations of rapid floods in these complex watersheds are particularly challenging, especially when geomorphological and hydrometeorological data are insufficient. In the Quebrada de Ramón, an Andean stream that passes through a highly populated area in the eastern part of Santiago, Chile, previous events have demonstrated that sediment concentration, flow resistance, and the characteristic temporal and spatial scales of the hydrograph are important variables for predicting the arrival time of the peak discharge, the flow velocities, and the extent of inundated areas. The objective of this investigation is to improve our understanding of the dynamics of flash floods in the Quebrada de Ramón by quantifying the effects of these factors on flood propagation. We implement a two-dimensional model based on the shallow water equations (Guerra et al. 2014), modified to account for hyperconcentrated flows over natural topography. We evaluate events of specific return periods and sediment concentrations, using different methodologies to quantify the flow resistance in the channel and floodplains. Through this work we provide a framework for future studies aimed at improving hazard assessment, urban planning, and early warning systems in urban areas near mountain streams with limited data that are affected by rapid flood events. Work supported by Fondecyt grant 1130940 and CONICYT/FONDAP grant 15110017.
Seismic hazard assessment of Iran
Directory of Open Access Journals (Sweden)
M. Ghafory-Ashtiany
1999-06-01
Full Text Available The development of the new seismic hazard map of Iran is based on probabilistic seismic hazard computation using historical earthquake data, geology, tectonics, fault activity, and seismic source models in Iran. These maps have been prepared to indicate the earthquake hazard of Iran in the form of iso-acceleration contour lines and seismic hazard zoning, using current probabilistic procedures. They display the probabilistic estimates of Peak Ground Acceleration (PGA) for return periods of 75 and 475 years. The maps were divided into intervals of 0.25 degrees in both latitude and longitude to calculate the peak ground acceleration values at each grid point and draw the seismic hazard curves. The results presented in this study will provide the basis for the preparation of seismic risk maps, the estimation of earthquake insurance premiums, and the preliminary site evaluation of critical facilities.
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
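Additive models of the kind discussed here are classically fitted by backfitting: cycling over covariates and smoothing the partial residuals. The sketch below uses a crude bin-average smoother on simulated data; it is illustrative only and is not the estimator developed in the paper.

```python
import numpy as np

def backfit(X, y, n_iter=20, bins=10):
    """Backfitting for y = alpha + sum_j f_j(x_j) + eps, with each f_j
    estimated by a quantile-bin average smoother (illustrative only)."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            r = y - alpha - f.sum(axis=1) + f[:, j]  # partial residual
            edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
            idx = np.digitize(X[:, j], edges)
            fj = np.zeros(n)
            for b in np.unique(idx):
                mask = idx == b
                fj[mask] = r[mask].mean()
            f[:, j] = fj - fj.mean()  # centre for identifiability
    return alpha, f

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = 2.0 + np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
alpha, f = backfit(X, y)
print(round(alpha, 2))
```

Replacing the bin smoother with a spline or kernel smoother gives the usual additive-model fit; the repeated-measures and partially linear extensions in the paper build on this same cycling idea.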
Probabilistic Seismic Hazard Assessment of the Chiapas State (SE Mexico)
Rodríguez-Lomelí, Anabel Georgina; García-Mayordomo, Julián
2015-04-01
The Chiapas State, in southeastern Mexico, is a very seismically active region due to the interaction of three tectonic plates: North American, Cocos, and Caribbean. We present a probabilistic seismic hazard assessment (PSHA) specifically performed to evaluate seismic hazard in the Chiapas State. The PSHA was based on a composite seismic catalogue homogenized to Mw, and a logic-tree procedure was used to consider different seismogenic source models and ground motion prediction equations (GMPEs). The results were obtained in terms of peak ground acceleration as well as spectral accelerations. The earthquake catalogue was compiled from the International Seismological Centre and the Servicio Sismológico Nacional de México. Two different seismogenic source zone (SSZ) models were devised based on a revision of the tectonics of the region and the available geomorphological and geological maps. The SSZ were finally defined by the analysis of geophysical data, resulting in two main SSZ models. The Gutenberg-Richter parameters for each SSZ were calculated from the declustered and homogenized catalogue, while the maximum expected earthquake was assessed from both the catalogue and geological criteria. Several worldwide and regional GMPEs for subduction and crustal zones were revised. For each SSZ model we considered four possible combinations of GMPEs. Finally, hazard was calculated in terms of PGA and SA for 500-, 1000-, and 2500-year return periods for each branch of the logic tree using the CRISIS2007 software. The final hazard maps represent the mean values obtained from the two seismogenic and four attenuation models considered in the logic tree. For the three return periods analyzed, the maps locate the most hazardous areas in the Chiapas Central Pacific Zone, the Pacific Coastal Plain, and the Motagua and Polochic Fault Zone; intermediate hazard values lie in the Chiapas Batholith Zone and in the Strike-Slip Faults Province. The hazard decreases
Energy Technology Data Exchange (ETDEWEB)
Augustoni, Arnold L.
2004-11-01
A laser hazard analysis and safety assessment was performed for the LASIRIS™ Model MAG-501L-670M-1000-45°-K diode laser associated with the High Resolution Pulse Scanner, based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers, and ANSI Standard Z136.6-2000, American National Standard for the Safe Use of Lasers Outdoors. The laser was evaluated for both indoor and outdoor use.
Pradhan, Biswajeet
2010-05-01
This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also from each of the other two areas (nine hazard maps in all), as a cross-validation of the model. For verification, the results of the analyses were compared with the field-verified landslide locations. Among the three cases applying logistic regression coefficients within the same study area, Selangor based on the Selangor coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients to the other two areas, Selangor based on the Cameron coefficients showed the highest prediction accuracy (90%), whereas Penang based on the Selangor coefficients showed the lowest accuracy (79%). Qualitatively, the cross
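Once fitted, the logistic regression model yields a susceptibility score per grid cell as in the sketch below. All coefficients and factor values here are invented for illustration, not those of the study.

```python
import math

def hazard_probability(factors, coefficients, intercept):
    """Landslide susceptibility from a fitted logistic regression:
    p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(b * x for b, x in zip(coefficients, factors))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical normalized factor values for one grid cell:
# slope, aspect, curvature, distance-to-drainage, NDVI
cell = [0.8, 0.3, 0.5, 0.2, 0.4]
coeffs = [2.1, 0.4, 1.2, -1.5, -0.9]
p = hazard_probability(cell, coeffs, intercept=-1.0)
print(round(p, 3))
```

The cross-validation in the paper amounts to holding the cell values fixed and swapping in the coefficient vector fitted on a different study area, then comparing the resulting map against field-verified landslide locations.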
Directory of Open Access Journals (Sweden)
Gianola Daniel
2007-09-01
Full Text Available Abstract Multivariate linear models are increasingly important in quantitative genetics. In high-dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under Gaussian assumptions, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model, and the algorithm developed for its Bayesian implementation, were used to describe five repeated records of milk yield in dairy cattle, and a one-common-factor model was compared with a standard multiple-trait model. The Bayesian Information Criterion favored the FA model.
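The one-common-factor structure reduces a p x p genetic covariance matrix to p loadings plus p specific variances, G = lambda lambda' + Psi. The numbers below are illustrative placeholders for a five-trait case such as the five repeated milk-yield records.

```python
import numpy as np

lam = np.array([0.9, 0.8, 0.85, 0.7, 0.75])  # hypothetical factor loadings
psi = np.diag([0.2, 0.3, 0.25, 0.4, 0.35])   # hypothetical specific variances

# Structured genetic covariance: rank-one common part plus diagonal specifics
G = np.outer(lam, lam) + psi

# 2p = 10 parameters instead of p(p+1)/2 = 15 for an unstructured 5x5 matrix
print(G.shape, bool(np.all(np.linalg.eigvalsh(G) > 0)))
```

The parameter saving grows quickly with p, which is the motivation for FA structuring in high-dimensional multi-trait models.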
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2016-07-31
The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass-transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The results indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated, and it provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
Energy Technology Data Exchange (ETDEWEB)
Liu, Xueqin [State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875 (China); National Marine Environmental Monitoring Center, State Oceanic Administration, Dalian 116023 (China); School of Social Development and Public Policy, Beijing Normal University, Beijing 100875 (China); Li, Ning [State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875 (China); Yuan, Shuai, E-mail: syuan@nmemc.org.cn [National Marine Environmental Monitoring Center, State Oceanic Administration, Dalian 116023 (China); Xu, Ning; Shi, Wenqin; Chen, Weibin [National Marine Environmental Monitoring Center, State Oceanic Administration, Dalian 116023 (China)
2015-12-15
As a random event, a natural disaster has a complex occurrence mechanism, and the comprehensive analysis of multiple hazard factors is important in disaster risk assessment. To improve the accuracy of risk analysis and forecasting, the formation mechanism of a disaster should be considered in the analysis and calculation of multiple factors. Given the importance of, and deficiencies in, multivariate analysis of dust storm disasters, 91 severe dust storm disasters in Inner Mongolia from 1990 to 2013 were selected as study cases in this paper. Main hazard factors from the 500-hPa atmospheric circulation system, the near-surface meteorological system, and underlying surface conditions were selected to simulate and calculate the multidimensional joint return periods. After comparing the simulation results with actual dust storm events over 54 years, we found that the two-dimensional Frank copula function showed better fitting results at the lower tail of the hazard factors, while the three-dimensional Frank copula function displayed better fitting results at the middle and upper tails. However, for dust storm disasters with short return periods, the three-dimensional joint return period simulation shows no obvious advantage; if the return period is longer than 10 years, it shows significant advantages in extreme-value fitting. Therefore, we suggest that the multivariate analysis method be adopted in forecasting and risk analysis of serious disasters with longer return periods, such as earthquakes and tsunamis. Furthermore, the exploration of this method lays a foundation for the prediction and warning of other natural disasters. - Highlights: • A method to estimate multidimensional joint return periods is presented. • The 2-D function gives better fitting results at the lower tail of hazard factors. • The 3-D simulation has obvious advantages in extreme-value fitting. • Joint return periods are closer to reality.
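A minimal sketch of a bivariate Frank-copula joint return period, the kind of quantity computed in this study. The dependence parameter `theta` and the quantile thresholds are arbitrary illustrative choices, not fitted values from the 91 events:

```python
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula C(u, v; theta); theta > 0 means positive dependence."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta

def joint_return_period_and(u, v, theta, mu=1.0):
    """Return period (in units of mu, e.g. years) of BOTH hazard factors
    exceeding their u- and v-quantile thresholds simultaneously:
    T = mu / P(U > u, V > v) = mu / (1 - u - v + C(u, v))."""
    return mu / (1.0 - u - v + frank_copula(u, v, theta))

# Two hazard factors each at their 0.9-quantile threshold; theta = 5 is an
# arbitrary positive-dependence value for illustration.
T_and = joint_return_period_and(0.9, 0.9, theta=5.0)
```

Under independence the same event would have a 100-year return period (1 / 0.01); positive dependence between the hazard factors makes their joint exceedance markedly more frequent.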
Institute of Scientific and Technical Information of China (English)
徐庆元; 周小林; 曾志平; 杨小礼
2004-01-01
A new mechanics model, which reveals the additional longitudinal force transmission between continuously welded rails and bridges, is established by taking into account the influence of the mutual relative displacements among the rail, the sleeper, and the beam. An example is presented and numerical results are compared. The results show that the additional longitudinal forces calculated with the new model are smaller than those of the previous model, especially in the case of flexible-pier bridges. The new model is also suitable for analysing the additional longitudinal force transmission between rails and bridges for ballastless track with small-resistance fasteners, without taking the sleeper displacement into account; compared with ballasted-track bridges, ballastless bridges show a much stronger additional longitudinal force transmission between the continuously welded rails and the bridges.
Research on a Risk Classification Model for Potential Hazards in Food
Institute of Scientific and Technical Information of China (English)
胡书玉; 黄小龙; 黎绍学; 罗文婷; 黄慧容
2016-01-01
Based on food safety risk monitoring requirements, and combining food consumption levels, hazard concentrations in food, and hazard toxicity, a risk classification model for potential hazards in food is proposed for the first time. The model can quantify the risk of specific hazards within a food category, and is demonstrated here on milk and dairy products. It can easily identify the key monitoring objects and concentrate supervision resources, and can also support subsequent food safety early warning and rapid response.
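One plausible way such a model could combine its three inputs is an exposure-times-toxicity score mapped onto monitoring grades. The multiplicative form, weights, and cutoffs below are assumptions for illustration, not the published model:

```python
# Hypothetical risk score: daily exposure (mg/day) from consumption and
# concentration, scaled by a unitless toxicity weight, then graded.

def hazard_risk_score(consumption_g_per_day, concentration_mg_per_kg,
                      toxicity_weight):
    """Exposure (mg/day) scaled by an assumed unitless toxicity weight."""
    exposure_mg_per_day = consumption_g_per_day / 1000.0 * concentration_mg_per_kg
    return exposure_mg_per_day * toxicity_weight

def risk_grade(score, low=0.01, high=0.1):
    """Map a score onto three monitoring-priority grades (cutoffs assumed)."""
    if score < low:
        return "low"
    return "medium" if score < high else "high"

# Hypothetical hazard in dairy: 250 g/day consumption, 0.5 mg/kg level,
# toxicity weight 1.0 -- all invented numbers.
score = hazard_risk_score(250.0, 0.5, 1.0)
grade = risk_grade(score)
```

Ranking hazards within a food category by such a score is what lets supervision resources concentrate on the highest-graded hazards.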
Directory of Open Access Journals (Sweden)
S. W. Astite
2015-12-01
Full Text Available The aim of the present study is the management of flood risk through the cartography of flood hazards caused by overflowing rivers. This cartography is developed using modern simulation tools, namely the hydraulic model HEC-RAS and the Geographic Information System ArcGIS. The study concerns the area surrounding Oued El Harrach (northern Algeria), which has been subject to several floods causing significant human and material damage. This loss is a consequence of the use of flood zones for human habitation, which can be avoided in the future by using the mapping of the spatial extent of the flood hazard along Oued El Harrach. Hence the importance of the cartography developed in this study as an essential tool for decision makers in the prevention, protection, and management of flood risks.
A Bayesian Seismic Hazard Analysis for the city of Naples
Faenza, Licia; Pierdominici, Simona; Hainzl, Sebastian; Cinti, Francesca R.; Sandri, Laura; Selva, Jacopo; Tonini, Roberto; Perfetti, Paolo
2016-04-01
In recent years many studies have focused on the determination and definition of the seismic, volcanic, and tsunamigenic hazard in the city of Naples, because Naples and its neighbouring area is one of the most densely populated places in Italy. In addition, the risk is increased by the type and condition of buildings and monuments in the city. It is therefore crucial to assess which active faults in Naples and the surrounding area could trigger an earthquake able to shake and damage the urban area. We collected data from the most reliable and complete databases of macroseismic intensity records (from 79 AD to the present), and associated an active tectonic structure with each seismic event. Furthermore, a set of active faults well known from geological investigations and located around the study area, which could shake the city but are not associated with any recorded earthquake, has been taken into account. This geological framework is the starting point for our Bayesian seismic hazard analysis for the city of Naples. We show the feasibility of formulating the hazard assessment procedure so as to include the information of past earthquakes in the probabilistic seismic hazard analysis. This strategy allows us, on the one hand, to enlarge the information used in the evaluation of the hazard, from alternative models for the earthquake generation process to past shaking, and on the other hand, to explicitly account for all kinds of information and their uncertainties. The Bayesian scheme we propose is applied to evaluate the seismic hazard of Naples. We implement five different spatio-temporal models to parameterize the occurrence of earthquakes potentially dangerous for Naples, and subsequently combine these hazard curves with ShakeMaps of past earthquakes that have been felt in Naples. The result is a posterior hazard assessment for three exposure times (50, 10, and 5 years) on a dense grid that covers the municipality of Naples, assuming bedrock soil conditions.
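The Bayesian combination step can be sketched as follows: candidate occurrence models are re-weighted by how well they explain past observed shaking, and the posterior hazard is their weighted mixture. All numbers below are invented for illustration, not values from the Naples analysis:

```python
# Posterior model weights: prior weight times likelihood of the observed
# past shaking under each model, normalized; posterior hazard is the
# weight-averaged exceedance probability.

def posterior_weights(priors, likelihoods):
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [w / total for w in unnorm]

def posterior_hazard(weights, exceedance_probs):
    """Mixture of per-model exceedance probabilities at one intensity level."""
    return sum(w * p for w, p in zip(weights, exceedance_probs))

# Three hypothetical spatio-temporal models with equal priors; likelihoods
# stand in for the comparison with ShakeMaps of past felt earthquakes.
priors = [1 / 3, 1 / 3, 1 / 3]
likelihoods = [0.2, 0.5, 0.3]
weights = posterior_weights(priors, likelihoods)

# Per-model probability of exceeding some intensity in a 50-year exposure.
p_exceed = [0.10, 0.25, 0.15]
hazard_50yr = posterior_hazard(weights, p_exceed)
```

Repeating this at every grid cell, intensity level, and exposure time yields the posterior hazard maps described in the abstract.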
High-resolution and Monte Carlo additions to the SASKTRAN radiative transfer model
Directory of Open Access Journals (Sweden)
D. J. Zawada
2015-06-01
Full Text Available The Optical Spectrograph and InfraRed Imaging System (OSIRIS instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2 %. As an example case for both models, Odin–OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4 % in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. The bias is largest when the sun is near the horizon and the solar scattering angle is far from 90°. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin–OSIRIS geometries.
Louie, Jacob; Shalaby, Amer; Habib, Khandker Nurul
2017-01-01
Most investigations of incident-related delay duration in the transportation context are restricted to highway traffic, with little attention given to delays due to transit service disruptions. Studies of transit-based delay duration are also considerably less comprehensive than their highway counterparts with respect to examining the effects of non-causal variables on the delay duration. However, delays due to incidents in public transit service can have serious consequences on the overall urban transportation system due to the pivotal role of public transit. The ability to predict the durations of various types of transit system incidents is indispensable for better management and mitigation of service disruptions. This paper presents a detailed investigation of incident delay durations in Toronto's subway system over the year 2013, focusing on the effects of the incidents' location and time, the train type involved, and non-adherence to proper recovery procedures. Accelerated Failure Time (AFT) hazard models are estimated to investigate the relationship between these factors and the resulting delay duration. The empirical investigation reveals that incident types that impact both safety and operations simultaneously generally have longer expected delays than incident types that impact either safety or operations alone. Incidents at interchange stations are cleared faster than incidents at non-interchange stations. Incidents during peak periods have nearly the same delay durations as off-peak incidents. The estimated models are believed to be useful tools in predicting the relative magnitude of incident delay duration for better management of subway operations.
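The AFT interpretation can be illustrated with a small sketch: in a log-linear AFT specification, log(T) = x·β + σ·ε, so each covariate multiplies the expected delay by exp(β). The coefficients below are invented for illustration; they are not the paper's estimates:

```python
import math

# Hypothetical AFT coefficients (log-duration scale). In an AFT model a
# positive coefficient lengthens delays multiplicatively by exp(beta).
BETA = {
    "intercept": math.log(10.0),      # assumed ~10-minute baseline delay
    "safety_and_operations": 0.6,     # incident affects both safety and ops
    "interchange_station": -0.3,      # cleared faster at interchange stations
}

def expected_delay_minutes(features):
    """Delay prediction from a log-linear AFT specification (sketch)."""
    log_t = BETA["intercept"]
    for name in features:
        log_t += BETA[name]
    return math.exp(log_t)

base = expected_delay_minutes([])
worst = expected_delay_minutes(["safety_and_operations"])
ratio = worst / base   # time ratio exp(0.6): ~82% longer delays
```

This multiplicative "time ratio" reading is what makes AFT estimates easy to communicate to operations staff.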
Energy Technology Data Exchange (ETDEWEB)
Grotjans, H.
1998-04-01
In the current Software Engineering Module (SEM2) three additional test cases have been investigated, as listed in Chapter 2. For all test cases it has been shown, by systematic grid-refinement studies, that the computed results are grid independent. The main objective of the current SEM2 was the verification and validation of the new wall-function implementation for the k-ε model and the SMC model. Analytical relations and experimental data have been used for comparison with the computational results; the agreement is good, so the correct implementation of the new wall function has been demonstrated. As the results in this report have shown, a consistent grid refinement can be done for any test case. This is an important improvement for industrial applications, as no model-specific requirements must be considered during grid generation. (orig.)
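A systematic grid-refinement study of this kind is commonly summarized by the observed order of accuracy and a Richardson-extrapolated grid-free value. The sketch below uses manufactured values from f(h) = 1 + 2·h², not results from the report:

```python
import math

# Grid-convergence check: three solutions on grids refined by a constant
# ratio r give the observed convergence order p and an extrapolated
# grid-independent estimate.

def observed_order(f_fine, f_mid, f_coarse, r):
    """Observed order p from three systematically refined grids."""
    return math.log((f_coarse - f_mid) / (f_mid - f_fine)) / math.log(r)

def richardson_extrapolate(f_fine, f_mid, p, r):
    """Estimate the grid-independent value from the two finest solutions."""
    return f_fine + (f_fine - f_mid) / (r ** p - 1.0)

# Manufactured solutions f(h) = 1 + 2*h**2 on h = 0.1, 0.2, 0.4 (r = 2).
f_fine, f_mid, f_coarse = 1.02, 1.08, 1.32
p = observed_order(f_fine, f_mid, f_coarse, r=2.0)
f_exact = richardson_extrapolate(f_fine, f_mid, p, r=2.0)
```

When p matches the scheme's formal order and the extrapolated value stops changing under further refinement, the result can be declared grid independent.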
Directory of Open Access Journals (Sweden)
C. Huggel
2003-01-01
Full Text Available Debris flows triggered by glacier-lake outbursts have repeatedly caused disasters in various high-mountain regions of the world. Accelerated change of glacial and periglacial environments due to atmospheric warming, together with increased anthropogenic development in most of these areas, raises the need for adequate hazard assessment and corresponding modelling. The purpose of this paper is to provide a modelling approach which takes into account the current evolution of the glacial environment and satisfies a robust first-order assessment of hazards from glacier-lake outbursts. Two topography-based GIS models simulating debris flows related to outbursts from glacier lakes are presented and applied to two lake-outburst events in the southern Swiss Alps. The models are based on information about glacier lakes derived from remote sensing data and on digital elevation models (DEMs). Hydrological flow routing is used to simulate the debris flow resulting from the lake outburst, applying both a multiple- and a single-flow-direction approach. Debris-flow propagation is given as probability-related values indicating the hazard potential of a given location, and the debris-flow runout distance is calculated on the basis of empirical data on the average slope trajectory. The results show that the multiple-flow-direction approach generally yields a more detailed propagation; the single-flow-direction approach, however, is more robust against DEM artifacts and hence more suited for process automation. The model is tested with three differently generated DEMs (including aero-photogrammetry- and satellite-image-derived DEMs). Potential applications of the respective DEMs are discussed, with a special focus on satellite-derived DEMs for use in remote high-mountain areas.
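The single-flow-direction routing idea can be sketched with a D8-style steepest-descent walk over a raster DEM. The tiny DEM below is invented; it only illustrates the routing rule, not the paper's probability or runout calculations:

```python
# D8-style single-flow-direction routing: from each cell the flow moves to
# the steepest-descent (lowest) of the 8 neighbours until it reaches a
# local minimum.

def d8_path(dem, start):
    """Follow steepest descent from `start` until no lower neighbour exists."""
    rows, cols = len(dem), len(dem[0])
    path = [start]
    r, c = start
    while True:
        best = None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    if dem[nr][nc] < dem[r][c] and (
                        best is None or dem[nr][nc] < dem[best[0]][best[1]]
                    ):
                        best = (nr, nc)
        if best is None:          # local minimum: flow stops here
            return path
        r, c = best
        path.append(best)

# Invented 3x3 elevation grid (metres); the outburst starts at the top-left.
dem = [
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 5.0],
    [7.0, 5.0, 3.0],
]
path = d8_path(dem, (0, 0))
```

A multiple-flow-direction variant would instead split the flow among all lower neighbours, which is what produces the more detailed (but DEM-artifact-sensitive) propagation noted in the abstract.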
Creating a Climate for Linguistically Responsive Instruction: The Case for Additive Models
Rao, Arthi B.; Morales, P. Zitlali
2015-01-01
As a state with a longstanding tradition of offering bilingual education, Illinois has a legislative requirement for native language instruction in earlier grades through a model called Transitional Bilingual Education (TBE). This model does not truly develop bilingualism, however, but rather offers native language instruction to English learners…
DEFF Research Database (Denmark)
Wu, Hao; Jespersen, Jacob Boll; Aho, Martti;
2013-01-01
Potassium chloride, KCl, formed from critical ash-forming elements released during combustion may lead to severe ash deposition and corrosion problems in biomass-fired boilers. Ferric sulfate, Fe2(SO4)3, is an effective additive, which produces sulfur oxides (SO2 and SO3) to convert KCl to the less corrosive potassium sulfate, K2SO4.
Córdoba, G.; Villarosa, G.; Sheridan, M. F.; Viramonte, J. G.; Beigt, D.; Salmuni, G.
2015-04-01
This paper presents the results of lahar modelling in the town of Villa La Angostura (Neuquén, Argentina) based on the Two-Phase-Titan modelling computer code. The purpose of this exercise was to provide decision makers with a useful tool to assess lahar hazard during the 2011 Puyehue-Cordón Caulle Volcanic Complex eruption. The possible occurrence of lahars mobilized from recent ash falls that could reach the city was analysed. The performance of the Two-Phase-Titan model was evaluated using 15 m resolution digital elevation models (DEMs) developed from optical and from radar satellite images. The output of these simulations showed inconsistencies that, based on field observations, were attributed to poor adjustment of the DEMs to the real topography. Further testing using a more accurate radar-based 10 m DEM provided more realistic predictions. This procedure allowed us to simulate the paths of flows from the Florencia, Las Piedritas, and Colorado creeks, which are the most hazardous streams for debris flows in Villa La Angostura. The output of the modelling is a valuable tool for city planning and risk management, especially considering the glacial geomorphic features of the region, the strong urban development growth, and the land occupation that has occurred in the last decade in Villa La Angostura and its surroundings.
Pretis, F.; Hendry, D. F.
2013-10-01
We outline six important hazards that can be encountered in econometric modelling of time-series data, and apply that analysis to demonstrate errors in the empirical modelling of climate data in Beenstock et al. (2012). We show that the claim made in Beenstock et al. (2012) as to the different degrees of integrability of CO2 and temperature is incorrect. In particular, the level of integration is not constant and not intrinsic to the process. Further, we illustrate that the measure of anthropogenic forcing in Beenstock et al. (2012), a constructed "anthropogenic anomaly", is not appropriate regardless of the time-series properties of the data.
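The point about orders of integration can be illustrated with a toy simulation: a random walk is I(1), so its level variance grows with sample size while its first differences are stationary, and the "degree of integration" measured in a finite sample is a property of the sample, not an intrinsic constant of the process. This is purely illustrative and uses no climate data:

```python
import random

random.seed(42)

def random_walk(n):
    """Cumulative sum of standard-normal innovations: an I(1) process."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        out.append(x)
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

walk = random_walk(5000)
diffs = [b - a for a, b in zip(walk, walk[1:])]

level_var = variance(walk)   # grows roughly in proportion to sample size
diff_var = variance(diffs)   # stays near the innovation variance (1.0)
```

Inference that treats the levels of such a series as stationary, or assumes its integration order is fixed, runs into exactly the hazards the paper catalogues.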
Energy Technology Data Exchange (ETDEWEB)
Alves, Vinicius M. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Muratov, Eugene [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Laboratory of Theoretical Chemistry, A.V. Bogatsky Physical-Chemical Institute NAS of Ukraine, Odessa 65080 (Ukraine); Fourches, Denis [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States); Strickland, Judy; Kleinstreuer, Nicole [ILS/Contractor Supporting the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), P.O. Box 13501, Research Triangle Park, NC 27709 (United States); Andrade, Carolina H. [Laboratory of Molecular Modeling and Design, Faculty of Pharmacy, Federal University of Goiás, Goiânia, GO 74605-220 (Brazil); Tropsha, Alexander, E-mail: alex_tropsha@unc.edu [Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599 (United States)
2015-04-15
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use these data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation, implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Class
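The core modelling step can be sketched as a Random Forest classifier over a descriptor matrix. The "descriptors" below are random surrogates on a synthetic dataset, not SiRMS or Dragon values, and the held-out split stands in for the paper's rigorous external validation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a curated sensitization dataset: 400 "compounds",
# 20 "descriptors", binary sensitizer / non-sensitizer label driven by two
# informative descriptors plus noise.
rng = np.random.default_rng(0)
n, d = 400, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

A production workflow would additionally restrict predictions to the model's applicability domain (e.g. by descriptor-space distance to the training set) before screening environmental chemicals, as the abstract emphasizes.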